Speakers


Industry Keynote

Steven Saunders, Robotics Plus

Steven Saunders is an Ag entrepreneur with 36 years’ experience in the horticultural sector, international ventures, applied technology, environmental research and development, innovation, and science. Steven is the CEO and founder of Robotics Plus Ltd (named in the Thrive top 50 global AgTech companies, the RBR50 most innovative robotics companies in 2020, and Forbes Asia’s top 100 companies to watch in 2021). He is the founder and a director of WNT Ventures, a tech incubator, and a founding member of the PlantTech Research Institute, both private sector and government initiatives. He is also the co-founder of the Newnham Park Innovation Centre in Tauranga, New Zealand, an active angel investor, and a shareholder and director of many privately owned companies and tech startups, including as a founding shareholder and director of Rockit Global, Miro Limited (a Māori berry collective), and MPAC (kiwifruit post-harvest). Thinking big and looking to solve global, scalable problems has been integral to Steve’s success, a philosophy that is well expressed in his favourite whakatauki: He rangi tā matawhāiti, he rangi tā matawhānui (a person with a narrow vision sees little opportunity; a person with a wide vision sees plentiful opportunities).

Keynotes

Aude Billard, EPFL

Today, many would like to deploy robots everywhere: in the streets, as cars, wheelchairs, and other mobility devices; in our homes, to cook, clean, and entertain us; on the body, to replace a lost limb or to augment its capabilities. For these robots to become reality, they need to depart from their ancestors in one dramatic way: they must escape from the comfortable, secluded, and largely predictable industrial world. In the past decades, robotics has made leaps forward in the design of increasingly complex robotic platforms to meet these challenges. In this endeavor, it has benefited from advances in optimization for solving high-dimensional constrained problems. These methods are powerful for planning in slow-paced tasks and when the environment is known. Advances in machine learning (ML) for analyzing vast amounts of data have offered powerful solutions for real-time control, but they often fall short of providing explicit guarantees on the learned model.

This talk will give an overview of a variety of methods to endow robots with the necessary reactivity to adapt their path in time-critical situations. Online reactivity is not just a matter of ensuring that there is a good-enough central processing unit (CPU) on board the robot. It requires inherently robust control laws that can provide multiple solutions. We will present methods by which robots can learn control laws from only a handful of examples while generalizing to the entire state space. The learned control laws are accompanied by theoretical guarantees for stability and boundedness. The methods combine planning and ML to learn feasible control laws that are retrievable at run time with no need for further optimization. The talk is largely based on our textbook and its accompanying video and coding examples:

Aude Billard, Sina Mirrazavi, and Nadia Figueroa (2022). Learning for Adaptive and Reactive Robot Control: A Dynamical Systems Approach. The MIT Press.
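For readers who want a concrete feel for the dynamical-systems view of control described above, the following is a minimal, self-contained sketch; it is not one of the algorithms from the textbook, which learn far richer nonlinear systems. It fits a single linear dynamical system x_dot = A(x - x*) to demonstrated states and velocities, then projects A so that its symmetric part is negative definite, which makes the attractor x* globally asymptotically stable. The function name and the synthetic demonstration data are invented for illustration.

```python
import numpy as np

def fit_stable_linear_ds(X, X_dot, x_star, eps=1e-2):
    """Fit x_dot = A (x - x_star) by least squares, then project A so that
    its symmetric part is negative definite.  With that property,
    V(x) = ||x - x_star||^2 is a Lyapunov function, so every trajectory of
    the learned system converges to the attractor x_star."""
    E = X - x_star                                 # states relative to the attractor
    W, *_ = np.linalg.lstsq(E, X_dot, rcond=None)  # solves E @ W ~= X_dot
    A_ls = W.T
    S = 0.5 * (A_ls + A_ls.T)                      # symmetric part
    K = 0.5 * (A_ls - A_ls.T)                      # skew-symmetric part, kept as-is
    w, V = np.linalg.eigh(S)
    w = np.minimum(w, -eps)                        # clip eigenvalues to be strictly negative
    return V @ np.diag(w) @ V.T + K

# Synthetic "demonstrations": noisy samples of a spiral that converges to the origin.
rng = np.random.default_rng(0)
x_star = np.zeros(2)
A_true = np.array([[-1.0, -2.0],
                   [ 2.0, -1.0]])
X = rng.normal(size=(200, 2))
X_dot = X @ A_true.T + 0.05 * rng.normal(size=(200, 2))

A = fit_stable_linear_ds(X, X_dot, x_star)
x = np.array([1.0, -0.5])                          # arbitrary start state
for _ in range(2000):
    x = x + 0.01 * (A @ (x - x_star))              # Euler rollout of the learned system
print(x)                                           # ends up close to the attractor x_star
```

Because the control law is an explicit function of the state, the robot can re-evaluate it instantly from any perturbed state, which is the kind of run-time reactivity the talk emphasizes.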

Aude Billard is head of the LASA laboratory at the School of Engineering of the Swiss Federal Institute of Technology Lausanne (EPFL). She holds a B.Sc. and M.Sc. in Physics from EPFL (1995) and a Ph.D. in Artificial Intelligence (1998) from the University of Edinburgh. She was the recipient of the Intel Corporation Teaching Award, the Swiss National Science Foundation career award, the Outstanding Young Person in Science and Innovation award from the Swiss Chamber of Commerce, the IEEE-RAS Best Reviewer Award, and the IEEE-RAS Distinguished Service Award. Her research spans the fields of machine learning and robotics, with a particular emphasis on learning from sparse data and performing fast and robust retrieval. This work finds application in robotics, human-robot and human-computer interaction, and computational neuroscience. Her research has received best paper awards from IEEE T-RO, RSS, ICRA, IROS, Humanoids, and RO-MAN, and has been featured in premier venues (BBC, IEEE Spectrum, Wired).

Aleksandra Faust, Google Brain Research

Training autonomous agents and systems that perform complex tasks in a variety of real-world environments remains a challenge. While reinforcement learning (RL) is a promising technique, training RL agents is an expensive, human-in-the-loop process, requiring heavy engineering and often yielding suboptimal results. In this talk we explore two main directions toward scalable reinforcement learning and autonomy. First, we discuss several methods for zero-shot sim2real transfer for mobile and aerial navigation, including visual navigation and fully autonomous navigation on a severely resource-constrained nano UAV. Second, we view the interaction between the human engineer and the agent under training as a decision-making process performed by the human, and consequently automate the training by learning a decision-making policy. With that insight, we focus on zero-shot generalization and discuss a compositional task curriculum that generalizes to unseen tasks of evolving complexity. We show that, across different applications, learning methods improve reinforcement learning agents' generalization and performance, and raise questions about nurture vs. nature in training autonomous systems.
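As a rough illustration of what treating the training loop as a decision-making process can mean, here is a toy sketch, not a method from the talk, in which an epsilon-greedy bandit "teacher" chooses which task difficulty the student trains on next and is rewarded by the student's measured learning progress. The callables train_on_level and evaluate are placeholders for whatever training and evaluation routines an application provides.

```python
import numpy as np

def train_with_automated_curriculum(train_on_level, evaluate, num_levels=5,
                                    rounds=100, epsilon=0.2, seed=0):
    """Toy 'teacher' that automates one decision a human engineer usually makes:
    which task difficulty level to train on next.  It is an epsilon-greedy bandit
    whose reward is the student's learning progress (change in evaluation score)
    after training on the chosen level."""
    rng = np.random.default_rng(seed)
    value = np.zeros(num_levels)                 # running estimate of progress per level
    counts = np.zeros(num_levels)
    score = evaluate()
    for _ in range(rounds):
        if rng.random() < epsilon:
            level = int(rng.integers(num_levels))   # explore a random difficulty
        else:
            level = int(np.argmax(value))           # exploit the most promising one
        train_on_level(level)
        progress = evaluate() - score               # learning-progress reward
        score += progress
        counts[level] += 1
        value[level] += (progress - value[level]) / counts[level]
    return value
```

In practice, evaluate would run the student policy on held-out tasks and train_on_level would run a few RL updates on tasks of the chosen difficulty; the point of the sketch is only that the curriculum choice itself becomes a learned decision rather than a hand-tuned one.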

Aleksandra Faust is a Senior Staff Research Scientist and Reinforcement Learning research team co-founder at Google Brain Research. Previously, Aleksandra founded and led Task and Motion Planning research in Robotics at Google and machine learning for self-driving car planning and controls at Waymo. She earned a Ph.D. in Computer Science at the University of New Mexico (with distinction) and a Master’s in Computer Science from the University of Illinois at Urbana-Champaign. Her research interests include learning for safe and scalable autonomy, reinforcement learning, and learning to learn for autonomous systems. Aleksandra won the IEEE RAS Early Career Award for Industry and the Tom L. Popejoy Award for the best doctoral dissertation at the University of New Mexico in the period 2011-2014, and was named a Distinguished Alumna by the University of New Mexico School of Engineering. Her work has been featured in the New York Times, PC Magazine, ZDNet, and VentureBeat, and was awarded Best Paper in Service Robotics at ICRA, Best Paper in Reinforcement Learning for Real Life (RL4RL) at ICML, and Best Paper of IEEE Computer Architecture Letters.

Bruce MacDonald, University of Auckland

The presentation will summarize robotics research and industry in NZ, and research on robots as assistants for people at the Centre for Automation and Robotics Engineering Sciences (CARES) at the University of Auckland. Topics include the roadmap for NZ Robotics, Automation, and Sensing (NZRAS), robot companions that support older people and others who need support, and robots that help people in the horticulture industry with tasks such as kiwifruit harvesting and grape vine pruning.

Bruce MacDonald completed a BE (1st class) and PhD in the Electrical Engineering Department of the University of Canterbury. After working with NZ Electricity for three years and at the DSIR in Wellington for a year, he moved to Canada and spent ten years in the Computer Science Department of the University of Calgary. Returning to New Zealand in 1995, he joined the Department of Electrical and Computer Engineering at the University of Auckland, where he helped set up a new programme in computer systems engineering and started the Robotics Laboratory. His long-term goal is to design intelligent robotic assistants that improve the quality of people’s lives, with primary research interests in human-robot interaction and robot programming systems, and applications in areas such as healthcare and agriculture. He is the director of the department’s robotics group and the leader of the multidisciplinary CARES robotics team at the University of Auckland. He is vice-chairman of NZ’s Robotics, Automation and Sensing association (NZRAS). For NZ’s national science challenge Science for Technological Innovation, he is the theme leader for Sensors, Robotics and Automation and deputy director. One of his current research programmes is to develop robots to help care for people, a multidisciplinary project undertaken jointly with Korean researchers and companies. Another current project is on orchard robotics, a joint project with NZ researchers and companies.


Tutorials

Takayuki Osa, University of Tokyo

In the real world, there are often diverse solutions to a problem. For example, when walking at a certain speed, there can be infinitely many walking styles that achieve the specified speed. The existence of multiple solutions often originates from multiple extrema of the objective function in robot learning. While discovering a single solution may be sufficient to solve the problem at hand, extracting diverse behaviors from a single task can be an efficient way to obtain skill repertoires for downstream tasks. In this talk, we will discuss methods for discovering diverse solutions in the context of motion planning and reinforcement learning. We will first discuss the problem formulation for discovering diverse solutions in motion planning and how to solve it. Next, we will discuss reinforcement learning methods for discovering diverse policies. This talk aims to provide the basics of discovering diverse solutions in robot learning and to share our experience through case studies.
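One common way to formalize "discovering diverse policies", used here purely as an illustration rather than as the specific algorithms covered in the tutorial, is to condition the policy on a latent skill variable z and add an intrinsic reward that makes skills distinguishable from the states they visit (a mutual-information bonus in the style of DIAYN). The sketch below computes that bonus from the logits of a learned skill discriminator; the function name and inputs are assumptions for illustration.

```python
import numpy as np

def skill_diversity_reward(discriminator_logits, skill, num_skills):
    """Intrinsic reward r(s, z) = log q(z | s) - log p(z) with a uniform skill
    prior p(z).  Maximizing it (in addition to, or instead of, the task reward)
    pushes a latent-conditioned policy pi(a | s, z) to visit states from which
    its skill z can be identified, i.e. to behave diversely.
    discriminator_logits: (num_skills,) logits of a learned classifier q(z | s)
    evaluated at the current state s."""
    log_q = discriminator_logits - np.log(np.sum(np.exp(discriminator_logits)))
    return log_q[skill] - np.log(1.0 / num_skills)

# Example: with 4 skills, a state the discriminator confidently attributes to
# skill 2 earns skill 2 a positive diversity bonus.
print(skill_diversity_reward(np.array([0.1, 0.2, 2.5, 0.0]), skill=2, num_skills=4))
```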

Takayuki Osa is an Associate Professor at the University of Tokyo and a visiting researcher at RIKEN Center for Advanced Intelligence Project (AIP). Before joining UTokyo in June 2022, he was an Associate Professor at the Kyushu Institute of Technology. From April 2017 to February 2019, he was a project assistant professor at the University of Tokyo. Takayuki Osa received his Ph.D. in Engineering from the University of Tokyo in 2015. From 2015 to 2017, he was a postdoctoral researcher at the Technical University of Darmstadt. His research focuses on motion planning, reinforcement learning, and imitation learning. He is interested in developing algorithms that leverage the structure and priors in robot learning.

Fabio Ramos, University of Sydney / NVIDIA

Recent advancements in simulation technology enable the fast development of decision-making algorithms that can be tested and tuned in simulation before they are deployed to the real world. However, what happens when the simulator does not represent reality well? Can it still be useful? In this presentation I address this question from a probabilistic inference perspective and describe a number of techniques that allow us to “invert” simulators and compute distributions over simulator parameters given real data. We start with classical simulation-based inference methods, where the simulation process is assumed to be expensive, and move to efficient and scalable particle-based inference methods suitable for parallel and differentiable simulators. Finally, I present Bayesian domain randomization as a powerful tool to learn policies in simulation that are robust to real-world deployment in problems involving autonomous navigation in different terrains, manipulation of granular materials, and deformable objects.
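As a concrete, minimal example of "inverting" a simulator, the sketch below implements plain rejection ABC, the simplest member of the simulation-based inference family mentioned above; the particle-based and differentiable-simulator methods in the talk are considerably more sample-efficient. The toy simulator and prior are invented for illustration. The accepted samples approximate the posterior over simulator parameters, and Bayesian domain randomization then trains the policy on parameters drawn from that posterior instead of from a hand-tuned randomization range.

```python
import numpy as np

def abc_posterior_samples(real_summary, simulate, prior_sample,
                          num_proposals=10000, accept_frac=0.01, seed=0):
    """Rejection ABC: draw simulator parameters from the prior, run the simulator,
    and keep the proposals whose simulated summary statistics are closest to the
    summary statistics of the real data.  The accepted samples approximate the
    posterior p(theta | real data)."""
    rng = np.random.default_rng(seed)
    thetas = np.stack([prior_sample(rng) for _ in range(num_proposals)])
    sims = np.stack([simulate(theta) for theta in thetas])
    dists = np.linalg.norm(sims - real_summary, axis=-1)
    keep = np.argsort(dists)[: max(1, int(accept_frac * num_proposals))]
    return thetas[keep]

# Toy usage: infer an unknown friction coefficient from an observed stopping distance.
def simulate(theta, v0=2.0):
    return np.array([v0**2 / (2.0 * 9.81 * theta[0])])   # stopping distance v0^2 / (2 g mu)

def prior_sample(rng):
    return rng.uniform(0.1, 1.0, size=1)                 # uniform prior over friction mu

posterior = abc_posterior_samples(real_summary=np.array([0.5]),
                                  simulate=simulate, prior_sample=prior_sample)
# Bayesian domain randomization: sample simulator parameters from `posterior`
# when training the policy, rather than from a hand-tuned range.
print(posterior.mean(axis=0))                             # concentrates near mu ~ 0.41
```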

Fabio Ramos is a Professor in robotics and machine learning at the School of Computer Science at the University of Sydney and a Principal Research Scientist at NVIDIA. He received BSc and MSc degrees in Mechatronics Engineering from the University of Sao Paulo, Brazil, and a PhD degree from the University of Sydney, Australia. His research focuses on statistical machine learning techniques for large-scale Bayesian inference and decision making, with applications in robotics, mining, environmental monitoring, and healthcare. Between 2008 and 2011 he led the research team that designed the first autonomous open-pit iron mine in the world. He has over 150 peer-reviewed publications and has received Best Paper Awards and Student Best Paper Awards at several conferences, including the International Conference on Intelligent Robots and Systems (IROS), the Australasian Conference on Robotics and Automation (ACRA), the European Conference on Machine Learning (ECML), and Robotics: Science and Systems (RSS).

Jeannette Bohg, Stanford University

Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, Jeannette Bohg was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods towards multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University in Dresden where she received her Master in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time and multi-modal such that they can provide meaningful feedback for execution and learning. Jeannette Bohg has received several Early Career and Best Paper awards, most notably the 2019 IEEE Robotics and Automation Society Early Career Award and the 2020 Robotics: Science and Systems Early Career Award.