CoRL publishes significant original research at the intersection of robotics and machine learning. CoRL is a selective, single-track international conference addressing the theory and practice of machine learning for robots and automation, where robot prototypes are scaled for cost-effectiveness, efficiency, and reliability in practice. CoRL welcomes papers in areas such as:
- Use and development of reinforcement learning for control of physical robots
- Imitation learning for robotics, e.g., by behavioral cloning or inverse reinforcement learning
- Model-free learning for robot decision-making
- Bio-inspired robot learning and control
- Probabilistic learning and representation of uncertainty in robotics
- Model learning, i.e., learning of robot structure and system identification
- Robot state estimation, localization and mapping
- Learning for robot task and motion planning
- Learning for multimodal robot perception, sensor fusion, and robot vision
- Learning for human-robot interaction and robot instruction via natural language, gestures, and alternative devices
- Applications of robot learning in robot manipulation, navigation, driving, flight, and other areas of robotics
- Robot systems, hardware, and sensors for learning and data-driven approaches
Submissions should focus on a core robotics problem and demonstrate the relevance of proposed models, algorithms, data sets, and benchmarks to robotics. Authors are encouraged to report real-robot experiments or provide convincing evidence that simulation experiments are transferable to real robots. Submissions without a robotics focus will be returned without review.
All submissions must include a limitations section that explicitly describes limiting assumptions, failure modes, and other limitations of the results and experiments, and how these might be addressed in future work. Please include the limitations section in the main paper within the 8-page limit.
Authors are also encouraged to submit code and data as supplementary materials.
The manuscript template is available here.