Join us this Friday (4/19) for the Robotics Seminar!
Title: Enabling Cross-Embodiment Learning
Abstract:
In this talk, I will investigate the problem of learning manipulation
skills across a diverse set of robotic embodiments. Conventionally,
manipulation skills are learned separately for every task, environment
and robot. However, in domains like Computer Vision and Natural Language
Processing, we have seen that one of the main contributing factors to
generalisable models is large amounts of diverse data. If one robot
could learn a new task even from data recorded with a different robot,
we could scale up the training data available to each robot embodiment
considerably. I will present a new, large-scale dataset that was put
together across multiple industry and academic research labs to make
it possible to explore cross-embodiment learning in the context of
robotic manipulation, alongside experimental results that provide an
example of effective cross-robot policies. Given this dataset, I will
also present multiple alternative ways to learn cross-embodiment
policies. These
example approaches will include (1) UniGrasp - a model that
synthesises grasps for new hands, (2) XIRL - an approach to
automatically discover and learn vision-based reward functions from
cross-embodiment demonstration videos and (3) EquivAct - an approach
that leverages equivariance to learn sensorimotor policies that
generalise to scenarios that are traditionally out-of-distribution.
Bio:
Jeannette Bohg is an Assistant Professor of Computer Science at
Stanford University. She was a group leader at the Autonomous Motion
Department (AMD) of the MPI for Intelligent Systems until September
2017. Before joining AMD in January 2012, Jeannette Bohg was a PhD
student at the Division of Robotics, Perception and Learning (RPL) at
KTH in Stockholm. In her thesis, she proposed novel methods towards
multi-modal scene understanding for robotic grasping. She also studied
at Chalmers in Gothenburg and at the Technical University of Dresden,
where she received her Master's in Art and Technology and her Diploma in
Computer Science, respectively. Her research focuses on perception and
learning for autonomous robotic manipulation and grasping. She is
specifically interested in developing methods that are goal-directed,
real-time and multi-modal such that they can provide meaningful feedback
for execution and learning. Jeannette Bohg has received several Early
Career and Best Paper awards, most notably the 2019 IEEE Robotics and
Automation Society Early Career Award and the 2020 Robotics: Science and
Systems Early Career Award.