Aude Billard, EPFL
Title: “Machine Learning Methods for Real-time Control with Theoretical Guarantees”
Today, many would like to deploy robots everywhere: in the streets, as cars, wheelchairs, and other mobility devices; in our homes, to cook, clean, and entertain us; on the body, to replace a lost limb or to augment its capabilities. For these robots to become reality, they must depart from their ancestors in one dramatic way: they must escape from the comfortable, secluded, and largely predictable industrial world. In the past decades, robotics has made leaps forward in the design of increasingly complex robotic platforms to meet these challenges. In this endeavor, it has benefited from advances in optimization for solving high-dimensional constrained problems. These methods are powerful for planning in slow-paced tasks and when the environment is known. Advances in machine learning (ML) for analyzing vast amounts of data have offered powerful solutions for real-time control, but they often fall short of providing explicit guarantees on the learned model.
This talk will give an overview of a variety of methods to endow robots with the necessary reactivity to adapt their path in time-critical situations. Online reactivity is not just a matter of having a good-enough central processing unit (CPU) on board the robot. It requires inherently robust control laws that can provide multiple solutions. We will see methods by which robots can learn control laws from only a handful of examples while generalizing to the entire state space. The learned control laws are accompanied by theoretical guarantees for stability and boundedness. The methods combine planning and ML to learn feasible control laws, retrievable at run time with no need for further optimization. The talk is largely based on our textbook and its accompanying video and coding examples:
Aude Billard, Sina Mirrazavi, and Nadia Figueroa (2022). Learning for Adaptive and Reactive Robot Control: A Dynamical Systems Approach. The MIT Press.
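To make the idea of a learned control law with a stability guarantee concrete, here is a minimal illustrative sketch (not the book's actual algorithms, and the function name, the plain least-squares fit, and the eigenvalue-projection step are my own assumptions): a linear dynamical system x_dot = A (x - x*) is fit to a handful of demonstrations, and A is then projected so that its symmetric part is negative definite, which guarantees global asymptotic convergence to the target x* under the Lyapunov function V(x) = ||x - x*||^2.

```python
import numpy as np

def fit_stable_linear_ds(X, X_dot, target, eps=1e-3):
    """Fit x_dot = A (x - target) to demonstrations (X, X_dot) by least
    squares, then enforce stability: shift every eigenvalue of the
    symmetric part of A below -eps, so V(x) = ||x - target||^2 is a
    Lyapunov function and the target is globally asymptotically stable."""
    E = X - target                                   # displacement from the attractor
    At, *_ = np.linalg.lstsq(E, X_dot, rcond=None)   # solves E @ A.T ~= X_dot
    A = At.T
    S = 0.5 * (A + A.T)                              # symmetric part
    K = A - S                                        # antisymmetric part (energy-neutral)
    w, V = np.linalg.eigh(S)
    w = np.minimum(w, -eps)                          # clip eigenvalues to be negative
    return K + V @ np.diag(w) @ V.T                  # recombine: stable by construction
```

Once fit, the control law is retrieved at run time by a single matrix-vector product, with no further optimization, which is the property the abstract emphasizes.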
Prof. Aude Billard is head of the LASA laboratory at the School of Engineering of the Swiss Federal Institute of Technology in Lausanne (EPFL). She holds a B.Sc. and M.Sc. in Physics from EPFL (1995) and a Ph.D. in Artificial Intelligence (1998) from the University of Edinburgh. She was the recipient of the Intel Corporation Teaching Award, the Swiss National Science Foundation career award, the Outstanding Young Person in Science and Innovation award from the Swiss Chamber of Commerce, the IEEE-RAS Best Reviewer Award, and the IEEE-RAS Distinguished Service Award. Her research spans the fields of machine learning and robotics, with a particular emphasis on learning from sparse data and performing fast and robust retrieval. This work finds applications in robotics, human-robot/human-computer interaction, and computational neuroscience. This research received best paper awards from IEEE T-RO, RSS, ICRA, IROS, Humanoids, and RO-MAN, and was featured in premier venues (BBC, IEEE Spectrum, Wired).
Aleksandra Faust, Google Brain Research
Title: “Towards Scalable Autonomy”
Training autonomous agents and systems that perform complex tasks in a variety of real-world environments remains a challenge. While reinforcement learning (RL) is a promising technique, training RL agents is an expensive, human-in-the-loop process that requires heavy engineering and often yields suboptimal results. In this talk we explore two main directions toward scalable reinforcement learning and autonomy. First, we discuss several methods for zero-shot sim2real transfer for mobile and aerial navigation, including visual navigation and fully autonomous navigation on a severely resource-constrained nano UAV. Second, we observe that the interaction between the human engineer and the agent under training is itself a decision-making process performed by the human, and we consequently automate training by learning a decision-making policy. With that insight, we focus on zero-shot generalization and discuss a compositional task curriculum that generalizes to unseen tasks of evolving complexity. We show that, across different applications, learning methods improve reinforcement learning agents' generalization and performance, and we raise questions about nurture vs. nature in training autonomous systems.
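For intuition, the hand-engineered loop that the talk proposes to automate can be sketched as a fixed-rule task curriculum: train on the easiest task until a recent-success threshold is cleared, then advance. This is an illustrative baseline only; the function name, the threshold rule, and the `train_step` interface are assumptions, not the talk's learned-policy method, which would replace this fixed rule with a learned decision-making policy.

```python
def threshold_curriculum(tasks, train_step, threshold=0.8, window=20, budget=1000):
    """Hand-coded curriculum baseline. `tasks` is ordered easiest-first;
    `train_step(task) -> bool` runs one training episode and reports success.
    Advance to the next task once the success rate over the last `window`
    episodes reaches `threshold`. Returns the final task level reached."""
    level, recent = 0, []
    for _ in range(budget):
        recent.append(train_step(tasks[level]))
        recent = recent[-window:]                       # keep a rolling window
        if len(recent) == window and sum(recent) / window >= threshold:
            if level + 1 < len(tasks):                  # graduate to a harder task
                level, recent = level + 1, []
    return level
```

The fragility of the hand-tuned `threshold` and `window` parameters is exactly the kind of engineering burden that motivates learning the training decisions instead.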
Aleksandra Faust is a Senior Staff Research Scientist and co-founder of the Reinforcement Learning research team at Google Brain Research. Previously, Aleksandra founded and led Task and Motion Planning research in Robotics at Google, and machine learning for self-driving car planning and controls at Waymo. She earned a Ph.D. in Computer Science at the University of New Mexico (with distinction) and a Master's in Computer Science from the University of Illinois at Urbana-Champaign. Her research interests include learning for safe and scalable autonomy, reinforcement learning, and learning to learn for autonomous systems. Aleksandra won the IEEE RAS Early Career Award for Industry, the Tom L. Popejoy Award for the best doctoral dissertation at the University of New Mexico in the period 2011-2014, and was named Distinguished Alumna by the University of New Mexico School of Engineering. Her work has been featured in the New York Times, PC Magazine, ZDNet, and VentureBeat, and was awarded Best Paper in Service Robotics at ICRA, Best Paper in Reinforcement Learning for Real Life (RL4RL) at ICML, and Best Paper of IEEE Computer Architecture Letters.
Frank Dellaert, Georgia Institute of Technology
Title: “Hybrid Factor Graphs for Motion Planning and Action Recognition”
Frank Dellaert is a Professor at Georgia Tech’s School of Interactive Computing, and a research scientist at Google AI. He has previously done stints at Skydio, a drone startup, and Facebook Reality Labs. His work is on sensor fusion and the use of large-scale graphical models for robotics sensing, thinking, and acting. With his students and collaborators, he created the popular sensor fusion/SLAM library GTSAM, see gtsam.org.
Bruce MacDonald, University of Auckland
Title: “Robotics in NZ”
The presentation will summarize robotics research and industry in NZ, and research on robots as assistants for people at the Centre for Automation and Robotics Engineering Sciences (CARES) at the University of Auckland. Topics include the roadmap for NZ Robotics, Automation, and Sensing (NZRAS); robot companions to support older people and others who need support; and robots that help people in the horticulture industry, for example in kiwifruit harvesting and grape vine pruning.
Prof. Bruce MacDonald completed a BE (1st class) and a PhD in the Electrical Engineering Department of the University of Canterbury. After working with NZ Electricity for three years and spending a year at the DSIR in Wellington, he moved to Canada and spent ten years in the Computer Science Department of the University of Calgary. Returning to New Zealand in 1995, he joined the Department of Electrical and Computer Engineering at the University of Auckland, where he helped set up a new programme in computer systems engineering and started the Robotics Laboratory. His long-term goal is to design intelligent robotic assistants that improve the quality of people's lives, with primary research interests in human-robot interaction and robot programming systems, and applications in areas such as healthcare and agriculture. He is the director of the department's robotics group and the leader of the multidisciplinary CARES robotics team at the University of Auckland. He is vice-chairman of NZ's robotics, automation and sensing association (NZRAS). For NZ's national science challenge Science for Technological Innovation, he is the theme leader for Sensors, Robotics and Automation and deputy director. One of his current research programmes develops robots to help care for people, a multidisciplinary project undertaken jointly with Korean researchers and companies. Another current project is on orchard robotics, undertaken jointly with NZ researchers and companies.
Takayuki Osa, KyuTech
Title: “Dealing with the Objective Function with Multiple Extrema in Robot Learning”
Takayuki Osa is an Associate Professor at the Kyushu Institute of Technology (KyuTech) and a visiting researcher at the RIKEN Center for Advanced Intelligence Project. Before joining KyuTech in March 2019, he was an Assistant Professor at the University of Tokyo. Takayuki Osa received a Ph.D. degree in Mechanical Engineering from the University of Tokyo in 2015. From 2015 to 2017, he was a postdoctoral researcher at the Technical University of Darmstadt. His research focuses on motion planning, reinforcement learning, and imitation learning. He is interested in developing algorithms that leverage structure and priors in robot learning.
Fabio Ramos, University of Sydney / NVIDIA
Title: “Simulation-Based Probabilistic Inference for Domain Randomization and Sim2Real Transfer”
Fabio Ramos is a Professor in robotics and machine learning at the School of Computer Science at the University of Sydney and a Principal Research Scientist at NVIDIA. He received the BSc and MSc degrees in Mechatronics Engineering from the University of Sao Paulo, Brazil, and the PhD degree from the University of Sydney, Australia. His research focuses on statistical machine learning techniques for large-scale Bayesian inference and decision making, with applications in robotics, mining, environmental monitoring, and healthcare. Between 2008 and 2011 he led the research team that designed the first autonomous open-pit iron mine in the world. He has over 150 peer-reviewed publications and received Best Paper Awards and Student Best Paper Awards at several conferences, including the International Conference on Intelligent Robots and Systems (IROS), the Australasian Conference on Robotics and Automation (ACRA), the European Conference on Machine Learning (ECML), and Robotics: Science and Systems (RSS).
Jeannette Bohg, Stanford University
Title: “Representations and Representation Learning in Robotics”
Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, Jeannette Bohg was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods for multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University of Dresden, where she received her Master's in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, such that they can provide meaningful feedback for execution and learning. Jeannette Bohg has received several Early Career and Best Paper awards, most notably the 2019 IEEE Robotics and Automation Society Early Career Award and the 2020 Robotics: Science and Systems Early Career Award.