New Benchmarks, Metrics, and Competitions for Robotic Learning

Organizers: Niko Suenderhauf, Markus Wulfmeier, Anelia Angelova, Ken Goldberg and Feras Dayoub


This workshop will discuss and propose new benchmarks, competitions, and performance metrics that address the specific challenges arising when deploying (deep) learning in robotics. Researchers in robotics currently lack widely accepted, meaningful benchmarks and competitions that inspire the community to work on the critical research challenges for robotic learning and that allow repeatable experiments and quantitative evaluation. This stands in stark contrast to computer vision, where datasets such as ImageNet and COCO, and their associated competitions, have fueled many of the advances of recent years.

This workshop will therefore bring together experts from the robotics, machine learning, and computer vision communities to identify the shortcomings of existing benchmarks, datasets, and evaluation metrics. We will discuss the critical challenges for learning in robotic perception, planning, and control that are not well covered by existing benchmarks, and combine the results of these discussions to outline new benchmarks for learning in these areas. The proposed benchmarks shall complement existing benchmark competitions and be run annually in conjunction with conferences such as RSS, CoRL, ICRA, NIPS, or CVPR. They will help close the gap between the robotics, computer vision, and machine learning communities and foster crucial advances in machine learning for robotics.