CS Distinguished Colloquium Speaker John Jumper: Monday, September 25 at 4:30
Speaker: John Jumper, DeepMind Date: Monday, September 25 Time: 4:30pm to 5:30pm Location: Friend Center 101 Host: Ellen Zhong Event page: [ https://www.cs.princeton.edu/events/26500 | https://www.cs.princeton.edu/events/26500 ] Title: Highly accurate protein structure prediction with deep learning Abstract: Our work on deep learning for biology, specifically the AlphaFold system, has demonstrated that neural networks are capable of highly accurate modeling of both protein structure and protein-protein interactions. In particular, the system shows a remarkable ability to extract chemical and evolutionary principles from experimental structural data. This computational tool has repeatedly shown the ability not only to predict accurate structures for novel sequences and novel folds but also to perform unexpected tasks such as selecting stable protein designs or detecting protein disorder. In this lecture, I will discuss the context of this breakthrough: the machine learning principles, the diverse data, and the rigorous evaluation environment that enabled it to occur, as well as the many innovative ways in which the community is using these tools to do new types of science. I will also reflect on some surprising limitations -- insensitivity to mutations and the lack of context about the chemical environment of the proteins -- and how these may be traced back to essential features of the training process. Finally, I will conclude with a discussion of some ideas on the future of machine learning in structural biology and how the experimental and computational communities can think about organizing their research and data to enable many more such breakthroughs in the future. Bio: John Jumper received his PhD in Chemistry from the University of Chicago, where he developed machine learning methods to simulate protein dynamics. Prior to that, he worked at D.E. Shaw Research on molecular dynamics simulations of protein dynamics and supercooled liquids.
He also holds an MPhil in Physics from the University of Cambridge and a B.S. in Physics and Mathematics from Vanderbilt University. At DeepMind, John is leading the development of new methods to apply machine learning to protein biology.
Speaker: Oded Regev, Courant Institute of Mathematical Sciences Date: Tuesday, October 10 Time: 12:30pm Location: CS 105 Host: Ran Raz Event page: [ https://www.cs.princeton.edu/events/26501 | https://www.cs.princeton.edu/events/26501 ] Title: Lattice-Based Cryptography and the Learning with Errors Problem Abstract: Most cryptographic protocols in use today are based on number theoretic problems such as integer factoring. I will give an introduction to lattice-based cryptography, a form of cryptography offering many advantages over the traditional number-theoretic-based ones, including conjectured security against quantum computers. The talk will mainly focus on the so-called Learning with Errors (LWE) problem. This problem has turned out to be an amazingly versatile basis for lattice-based cryptographic constructions, with hundreds of applications. I will also mention work on making cryptographic constructions highly efficient using algebraic number theory (leading to a NIST standard and implementation in browsers such as Chrome), as well as some recent applications to machine learning. The talk will be accessible to a wide audience. Bio: Oded Regev is a Silver Professor in the Courant Institute of Mathematical Sciences of New York University. He received his Ph.D. in computer science from Tel Aviv University in 2001 under the supervision of Yossi Azar, continuing to a postdoctoral fellowship at the Institute for Advanced Study. He is a recipient of the 2019 Simons Investigator award, the 2018 Gödel Prize, several best paper awards, and was a speaker at the 2022 International Congress of Mathematicians. His main research areas include theoretical computer science, RNA biology, quantum computation, and machine learning. This talk will be recorded and live streamed via Zoom. Please register for the webinar here: [ https://princeton.zoom.us/webinar/register/WN_SInthQZsRIGq2b7b7Hqx2A | https://princeton.zoom.us/webinar/register/WN_SInthQZsRIGq2b7b7Hqx2A ]
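As background for the abstract above: an LWE instance consists of samples (a, ⟨a, s⟩ + e mod q) for a secret vector s and small random errors e. The following toy Python sketch generates such samples; the parameters (q = 97, n = 8) are illustrative choices of mine and far too small to be secure.

```python
import random

q = 97   # toy modulus; real lattice schemes use much larger parameters
n = 8    # dimension of the secret vector
m = 16   # number of samples

random.seed(0)
s = [random.randrange(q) for _ in range(n)]  # the secret vector

def lwe_sample(s):
    """One LWE sample: (a, <a, s> + e mod q) with a uniform and e a small error."""
    a = [random.randrange(q) for _ in range(len(s))]
    e = random.choice([-1, 0, 1])  # small noise term
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

samples = [lwe_sample(s) for _ in range(m)]

# Without the noise e, s could be recovered from n samples by Gaussian
# elimination mod q; the noise is what makes recovery conjecturally hard,
# even for quantum computers.
for a, b in samples:
    residual = (b - sum(ai * si for ai, si in zip(a, s))) % q
    assert residual in (0, 1, q - 1)  # the residual is exactly the small error e
```

Knowing s, the receiver can strip out ⟨a, s⟩ and be left with only the small error, which is the basic decryption idea behind LWE-based encryption.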
Speaker: Jun-Yan Zhu, from Carnegie Mellon University Date: Thursday, November 30 Time: 12:30pm Location: CS 105 Host: Jia Deng Event page: [ https://www.cs.princeton.edu/events/26534 | https://www.cs.princeton.edu/events/26534 ] Title: Enabling Collaboration between Creators and Generative Models Abstract: Large-scale generative visual models, such as DALL·E 2 and Stable Diffusion, have made content creation require as little effort as writing a short text description. Meanwhile, these models also spark concerns among artists, designers, and photographers about job security and proper credit for their contributions to the training data. This leads to many questions: Will generative models make creators’ jobs obsolete? Should creators stop publicly sharing their work? Should we ban generative models altogether? In this talk, I argue that human creators and generative models can coexist. To achieve this, we need to involve creators in the loop of both model inference and model training while crediting them for their involvement. I will first explore our recent efforts in model rewriting, which allows creators to freely control the model’s behavior by adding, altering, or removing concepts and rules. I will demonstrate several applications, including creating new visual effects, customizing models with multiple personal concepts, and removing copyrighted content. I will then discuss our data attribution algorithm for assessing the influence of each training image on a generated sample. Collectively, we aim to allow creators to leverage the models while retaining control over the creation process and data ownership. Bio: Jun-Yan Zhu is an Assistant Professor at CMU’s School of Computer Science. Prior to joining CMU, he was a Research Scientist at Adobe Research and a postdoc at MIT CSAIL. He obtained his Ph.D. from UC Berkeley and B.E. from Tsinghua University. He studies computer vision, computer graphics, and computational photography.
His current research focuses on generative models for visual storytelling. He has received the Packard Fellowship, the NSF CAREER Award, the ACM SIGGRAPH Outstanding Doctoral Dissertation Award, and the UC Berkeley EECS David J. Sakrison Memorial Prize for outstanding doctoral research, among other awards. This talk will be recorded and live streamed via Zoom. Register for webinar here: [ https://princeton.zoom.us/webinar/register/WN_GKHBDQKFQ7uCcsuLxSqnfQ | https://princeton.zoom.us/webinar/register/WN_GKHBDQKFQ7uCcsuLxSqnfQ ]
Speaker: [ https://ece.duke.edu/faculty/yiran-chen | Yiran Chen ] , from Duke University Date: Monday, December 4 Time: 12:30pm Location: CS 105 Host: Kai Li Event page: [ https://www.cs.princeton.edu/events/26536 | https://www.cs.princeton.edu/events/26536 ] Title: AI Models for Edge Computing: Hardware-aware Optimizations for Efficiency Abstract: As artificial intelligence (AI) transforms various industries, state-of-the-art models have exploded in size and capability. The growth in AI model complexity is rapidly outstripping hardware evolution, making the deployment of these models on edge devices challenging. To enable advanced AI locally, models must be optimized to fit within hardware constraints. In this presentation, we will first discuss how computing hardware designs impact the effectiveness of commonly used AI model optimizations for efficiency, including techniques like quantization and pruning. Additionally, we will present several methods, such as hardware-aware quantization and structured pruning, to demonstrate the significance of software/hardware co-design. We will also demonstrate how these methods can be understood via a straightforward theoretical framework, facilitating their seamless integration in practical applications and their straightforward extension to distributed edge computing. At the conclusion of our presentation, we will share our insights and vision for achieving efficient and robust AI at the edge. Bio: Yiran Chen received his B.S. (1998) and M.S. (2001) degrees from Tsinghua University and his Ph.D. (2005) from Purdue University. After spending five years in industry, he joined the University of Pittsburgh in 2010 as an Assistant Professor and was promoted to Associate Professor with tenure in 2014, holding the Bicentennial Alumni Faculty Fellow position. He currently serves as the John Cocke Distinguished Professor of Electrical and Computer Engineering at Duke University.
He is also the director of the NSF AI Institute for Edge Computing Leveraging Next-generation Networks (Athena), the NSF Industry-University Cooperative Research Center (IUCRC) for Alternative Sustainable and Intelligent Computing (ASIC), and the co-director of the Duke Center for Computational Evolutionary Intelligence (DCEI). His group's research focuses on new memory and storage systems, machine learning and neuromorphic computing, and mobile computing systems. Dr. Chen has published one book, more than 600 technical publications, and has been granted 96 US patents. He has received 11 Ten-Year Retrospective Influential Paper Awards, Outstanding Paper Awards, Best Paper Awards, and Best Student Paper Awards, as well as 2 best poster awards and 15 best paper nominations from various international journals, conferences, and workshops. He has been honored with numerous awards for his technical contributions and professional services, including the IEEE CASS Charles A. Desoer Technical Achievement Award and the IEEE Computer Society Edward J. McCluskey Technical Achievement Award. He has been a distinguished lecturer for IEEE CEDA and CAS, is a Fellow of the AAAS, ACM, and IEEE, and currently serves as the chair of ACM SIGDA and the Editor-in-Chief of the IEEE Circuits and Systems Magazine. He is a founding member of the steering committee of the Academic Alliance on AI Policy (AAAIP). This talk will be recorded and live streamed via Zoom. Register for webinar here: https://princeton.zoom.us/webinar/register/WN_aYpJ8GvLQey8v7zy9FFZqw
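To make the quantization mentioned in the abstract above concrete: the sketch below is a generic symmetric per-tensor int8 scheme, not Dr. Chen's hardware-aware method; the example weights are made up for illustration.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= scale * q with q in [-127, 127]."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0                     # one scale for the whole tensor
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [qi * scale for qi in q]

weights = [0.42, -1.30, 0.07, 0.95, -0.61]      # made-up example weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding puts each restored weight within half a quantization step of the original.
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, restored))
```

Storing int8 codes plus one float scale cuts weight memory roughly 4x versus float32, at the cost of the bounded rounding error checked above; hardware-aware variants choose scales and granularity to match what the target accelerator executes efficiently.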
CS Distinguished Colloquium Speaker: [ https://www.bpccenter.org/people/dr-kinnis-gosha | Dr. Kinnis Gosha ] , from Morehouse College Date: Thursday, January 18 Time: 12:30pm Location: CS 402 Host: Olga Russakovsky Event page: [ https://www.cs.princeton.edu/events/26556 | https://www.cs.princeton.edu/events/26556 ] Title: Building Symbiotic Collaborations with HBCU STEM Faculty and Departments Abstract: This talk will strategically discuss HBCU STEM departments and how to engage them. Possessing firsthand experience with more than 18 awarded NSF grants in the last eight years, Dr. Kinnis Gosha will facilitate a candid discussion. Topics include (1) Relationship building with HBCU faculty, students, and administrators as subject matter experts, research collaborators, and colleagues. (2) Intentional budgets, publications, and measurable outcomes aligned with HBCU research goals and strategic plans. (3) What works and rarely works when collaborating on proposals, implementing projects, reporting data, and evaluating programs. Participants in this session will also leave with an HBCU CS Engagement Checklist to facilitate symbiotic partnerships. Bio: Dr. Kinnis Gosha (pronounced Go-Shay) is the Hortinius I. Chenault Endowed Professor of Computer Science, Academic Program Director for Software Engineering, and Executive Director of the Morehouse Center for Broadening Participation in Computing. Dr. Gosha’s research interests include conversational AI, social media data analytics, computer science education, broadening participation in computing, and culturally relevant computing. Gosha also leads Morehouse’s Software Engineering degree program, where he builds collaborations with industry partners to provide his students with a variety of experiential learning experiences. 
In October of 2022, Gosha took over as the Principal Investigator of the Institute for African-American Mentoring in Computing Sciences (IAAMCS), a Broadening Participation in Computing Alliance, funded by the National Science Foundation. To date, 21 undergraduate researchers in his lab have gone on to pursue a doctoral degree in computing. PI Gosha currently has over 60 peer-reviewed publications in the area of Broadening Participation in Computing (BPC). Since arriving at Morehouse (2011), he has included undergraduate student researchers as co-authors in 26 peer-reviewed manuscripts. Gosha is very active in the BPC community serving as a regular paper and poster reviewer for the Tapia, SIGCSE, and RESPECT conferences. Currently, Gosha is the Co-Chair of the IEEE Special Technical Community for Broadening Participation and a newly elected board member of both the Computing Research Association and the National Science Foundation Computer and Information Science and Engineering (CISE) Advisory Committee. This talk will not be live streamed or recorded.
CS Colloquium Speaker: [ https://www.cs.yale.edu/homes/abhishek/ | Abhishek Bhattacharjee ] , Yale University Date: Monday, Feb 12 Time: 12:30pm Location: CS 105 Host: Kai Li, Margaret Martonosi, Jonathan Cohen Event page: [ https://www.cs.princeton.edu/events/26566 | https://www.cs.princeton.edu/events/26566 ] Zoom registration link: [ https://princeton.zoom.us/webinar/register/WN_5tdWExCiT1Sd7uqCXHLbPg | https://princeton.zoom.us/webinar/register/WN_5tdWExCiT1Sd7uqCXHLbPg ] *Note, the webinar is only available to Princeton University students, faculty, and staff. Title: Balancing Heterogeneity and Programmability Across Computing Scales Abstract: Hardware heterogeneity is everywhere, from the high-performance server chips that comprise our data centers to the milliwatt-scale chips on board our biomedical devices. The central thesis of my talk is that hardware heterogeneity breaks through traditional computing abstractions to enable orders of magnitude performance improvements, but that these performance improvements are useful to software developers only when hardware continues to remain easy to program. I will discuss ongoing research in my group on balancing hardware heterogeneity with abstractions/interfaces to enable programmability/flexibility. As exemplars of this question, I will focus on the benefits and challenges of building shared address spaces between general-purpose CPUs and domain-specific hardware accelerators. I will also discuss my work on building flexible neural interfaces driven by a collection of programmable ASICs. My talk will highlight cross-cutting lessons learned and their implications on future accelerator-rich computer systems. Bio: Abhishek Bhattacharjee is a Professor of Computer Science at Yale University, and is also affiliated with Yale's Wu Tsai Institute for the Brain Sciences as well as Yale's Center for Brain & Mind Health. He is interested in the hardware/software interface. 
Abhishek's research on address translation has shipped in over one billion AMD Zen CPU cores, over 50 million NVIDIA GPUs (starting with their Ampere line), over two billion Linux kernel downloads, and has also helped the group tasked with deciding the RISC-V page table format. For these contributions, Abhishek was the recipient of the 2023 ACM SIGARCH Maurice Wilkes Award. Abhishek teaches courses on computer architecture, operating systems, and compilers. In recognition of his teaching and mentoring of undergraduate and graduate students, Abhishek was the recipient of the 2022 Yale Engineering Ackerman Award.
Speaker: [ https://www.vincentsitzmann.com/ | Vincent Sitzmann ] , MIT Date: Thursday, October 26 Time: 4:30pm Location: CS 105 Host: Ellen Zhong Event page: https://www.cs.princeton.edu/events/26523 Title: 3D-aware Representation Learning for Vision Abstract: Given only a single picture, people are capable of inferring a mental representation that encodes rich information about the underlying 3D scene. We acquire this skill not through massive labeled datasets of 3D scenes, but through self-supervised observation and interaction. Building machines that can infer similarly rich neural scene representations is critical if they are to one day parallel people’s ability to understand, navigate, and interact with their surroundings. In my talk, I will discuss how this motivates a 3D approach to self-supervised learning for vision. I will then present my research group’s recent advances toward training self-supervised scene representation learning methods at scale, on uncurated video without pre-computed camera poses. I will further present recent advances toward modeling uncertainty in 3D scenes, as well as progress on endowing neural scene representations with more semantic, high-level information. Bio: Vincent is an Assistant Professor at MIT EECS, where he leads the Scene Representation Group. Previously, he finished his Ph.D. at Stanford University. He is interested in the self-supervised training of 3D-aware vision models: his goal is to train models that, given a single image or short video, can reconstruct a representation of the underlying scene that encodes information about materials, affordance, geometry, lighting, etc., a task that is simple for humans but currently impossible for AI. This talk will be live streamed via Zoom. Register for webinar here: [ https://princeton.zoom.us/webinar/register/WN_Lsg1-_M_TKO6swnGD43lTA | https://princeton.zoom.us/webinar/register/WN_Lsg1-_M_TKO6swnGD43lTA ]