Submission Deadline: 15 June 2022
Paper Submission Requirements and Instructions
Papers are submitted through OpenReview, via the link at the top of this page. All submissions should comply with the format and length indicated below. CoRL is double-blind, which means all papers must be anonymized. The submitted papers and reviews will be publicly accessible, but only accepted papers will be de-anonymized. Submitted papers will be reviewed by at least two reviewers. Accepted papers will appear in the Proceedings of Machine Learning Research (formerly JMLR Workshop and Conference Proceedings).
Paper submission open: May 15, 2022
Paper submission deadline: June 15, 2022; 23:59 Pacific Time (UTC-7)
Supplemental materials due: June 22, 2022; 23:59 Pacific Time (UTC-7)
Editorial Rejection: June 26, 2022
Reviews available: August 15, 2022
Discussion period: August 15-26, 2022
Paper acceptance notifications: September 10, 2022
Camera ready papers due: November 1, 2022
Submission Requirements and Instructions
Submissions are due June 15, 2022, 11:59PM Pacific Time, but supplemental materials can be submitted up to a week later, by June 22, 2022, 11:59PM Pacific Time. To submit them, use the “Supplementary Material” button in the OpenReview forum of the submitted paper.
The page limit is 8 pages plus n pages for references (8+n pages). Authors will have the option to submit a supplementary file containing further details, which the reviewers may decide to consult, as well as a supplementary video. All supplementary materials will be submitted through OpenReview as a single zip file.
All accepted papers will be presented in poster sessions, while selected papers will be invited for an oral spotlight presentation.
Submissions will be evaluated based on the significance and novelty of the results, either theoretical or empirical. Results will be judged on the degree to which they have been objectively established and/or their potential for scientific and technological impact, as well as their relevance to robotic learning. Submissions should focus on a core robotics problem and demonstrate the relevance of proposed models, algorithms, data sets, and benchmarks to robotics. Authors are encouraged to report real-robot experiments or provide convincing evidence that simulation experiments are transferable to real robots. Papers with both experimental and theoretical results relevant to robot learning are welcome; however, submissions without a robotics focus will be returned without review. Our intent is to make CoRL a selective top-tier conference on robotic learning.
All submissions must include a limitations section, explicitly describing limiting assumptions, failure modes, and other limitations of the results and experiments and how these might be addressed in the future.
Authors will have an opportunity to respond to reviewers and update their papers during the discussion period. Reviews and discussion of accepted papers will be made publicly available.
Desk Reject Criteria
Process: Area chairs (ACs) will identify desk-rejection candidates, using one of the criteria below as justification. The program chairs (PCs) will examine the candidates and make the final decision. We will err on the side of caution, and only desk reject a paper when there is consensus among all PCs and the AC.
A paper can be desk rejected for one of the following reasons: formatting issues, anonymity violation, a missing or insufficient limitations section, or scope.
Formatting issues — the paper is either too long or in an incorrect format.
Anonymity violation — the main manuscript, supplemental materials, or a link provided in a paper identifies one or more of the authors.
Missing or insufficient limitations section — all papers are required to have an honest and sufficiently encompassing limitations section.
Scope: All CoRL submissions must demonstrate their relevance to robot learning through intent — explicitly addressing a learning question for physical robots — or outcome — testing the proposed learning solution on physical robots. Examples of out-of-scope submissions include:
- No learning: Manually designing and tuning a robot controller without the use of learning.
- No learning: A search algorithm for model-based planning.
- No robotics: A generic result on sample complexity.
- No robotics: A generic RL algorithm.
- Little robotics: Improved performance on a standard CV dataset, e.g., ImageNet recognition.
- Insufficient algorithm evaluation quality: An RL algorithm evaluated with a low number of random seeds (e.g., fewer than 5) in a stochastic scenario where seed choice drastically alters performance (see Henderson et al., Deep Reinforcement Learning that Matters, 2018, https://arxiv.org/pdf/1709.06560.pdf for further examples).
- An algorithm that was only evaluated in simulation, without credible evidence that it could transfer to a real robot (e.g., due to sim2real gaps or data-efficiency requirements).
We will not accept papers that are identical or substantially similar to papers that have previously been published or accepted for publication in an archival venue, nor papers submitted in parallel to other conferences or archival venues. Archival venues include conferences and journals with formally published proceedings, but do not include non-archival workshops. Submission is permitted for papers that have previously appeared only as a technical report, e.g. in arXiv.
Software Submission Instructions
Authors are encouraged to submit code alongside the paper. Authors should provide a readme file explaining how to run their software and, when applicable, how to use it to replicate the experimental results given in the paper. For code that includes files not directly relevant to the scientific contribution of the paper, authors should indicate in the readme file which parts of the code pertain to the scientific claims of the paper, to ease the review process. Please verify that the submitted code abides by the same anonymity standard as the paper.
By default, and unless authors specify a different license scheme, code submitted along with the paper will be protected under exclusive copyright linked to the paper ID. Reviewers are strictly forbidden from using the code outside the review process.
Use of Code / Citation / Licensing
Be aware that you must always cite your sources, including in code you may be using for your research. Failing to do so may lead others to believe that you are the authors of the code, which would be considered plagiarism. Authors are requested to explicitly cite sources in the code header and in the readme file.
Authors must also ensure that they have a license to modify or use other people’s code. See https://choosealicense.com/no-permission/ for information on how to act when you find code on the web that does not have a specific license.