Localization, mapping, visual place recognition, and Simultaneous Localization And Mapping (SLAM) techniques are never an end in themselves, but rather a means to enable higher-level tasks for robots and people alike. Major advances in localization capability have been made in the robotics, computer vision, and machine learning fields, especially over the past two decades with the advent of mature SLAM systems and modern machine-learning-driven approaches. Yet localization technology is still sparsely deployed in enduring, large-scale commercial applications, and despite the adage that “SLAM is solved”, for many applied roboticists it is abundantly clear that substantial challenges still remain.
Involving both researchers and end-users from industry, this workshop will focus on the key reasons we develop localization and mapping systems, and use those insights to drive a reflection on the key methods by which we approach localization research. We will evaluate whether new innovations in techniques are required, and how we can improve the metrics and benchmarks by which we assess performance in the research field so that they become better proxies for performance in actual deployments. To maximize inclusivity, we are providing substantial funding to support researchers from under-represented and lower socio-economic regions to attend and participate in the workshop.
Time | Paper Title | Authors (Presenter boldfaced) |
---|---|---|
09:45 | ConPR: Ongoing Construction Site Dataset for Place Recognition | Dongjae Lee, Minwoo Jung, Ayoung Kim |
09:52 | Learned Inertial Odometry for Autonomous Drone Racing | Giovanni Cioffi, Leonard Bauersfeld, Elia Kaufmann, Davide Scaramuzza |
10:00 | Alignability maps for the prediction and mitigation of localization error | Manuel Castellano-Quero, Tomasz Piotr Kucner, Martin Magnusson |
10:07 | Operational requirements for localization in autonomous vehicles | Arpan Kusari, Satabdi Saha |
10:15 | Sensor Localization by Few Distance Measurements via the Intersection of Implicit Manifolds | Michael Moshe Bilevich, Steven LaValle, Dan Halperin |
10:22 | Look Both Ways: Bidirectional Sensing for Automatic Multi-Camera Registration | Subodh Mishra, Sushruth Nagesh, Sagar Manglani, Shubham Shrivastava, Graham Mills, Punarjay Chakravarty, Gaurav Pandey |
- What are the specific use cases for full SLAM approaches, when is semi-supervised or collaborative SLAM 'enough', and when do we only need localization and/or visual place recognition?
- Are the current performance metrics we use, like Recall@X, sufficient for enabling real-world utility? What better performance metrics could we design and support as a community? (A minimal Recall@K sketch follows this list.)
- How do we currently benchmark localization systems, and is our reliance as a research community on passive, dataset-based testing hurting us in the long run? What could we do better, including the use of simulation and real-robot benchmark testing platforms?
- Are our research goals as a field too focused on beating the previous state-of-the-art by a few percent? What other goals could we better pursue, like generality?
- Viewpoint- and appearance-invariance have emerged as two of the key themes shaping much vision-based localization research... is this the right categorization?
- Localization is a vibrant topic across the robotics, computer vision, and machine learning fields - is this a good thing, a bad thing, or somewhere in between?
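As a concrete anchor for the metrics question above, here is a minimal Recall@K sketch in Python, under the common convention that a retrieval counts as correct if it lies within a fixed geometric distance of the query; the array shapes, the 25 m threshold, and the brute-force matching are illustrative assumptions, not a prescribed workshop evaluation.

```python
import numpy as np

def recall_at_k(query_desc, db_desc, query_pos, db_pos, k=5, dist_thresh=25.0):
    """Fraction of queries with at least one correct place in their top-K retrievals."""
    # Pairwise descriptor distances, shape (Q, N); brute force for clarity.
    desc_dist = np.linalg.norm(query_desc[:, None] - db_desc[None, :], axis=-1)
    # Indices of the K database entries closest in descriptor space, shape (Q, K).
    topk = np.argsort(desc_dist, axis=1)[:, :k]
    # Ground-truth geometric distance from each query to its K retrievals, shape (Q, K).
    geo_dist = np.linalg.norm(query_pos[:, None] - db_pos[topk], axis=-1)
    # A query succeeds if any of its K retrievals is within the distance threshold.
    return float((geo_dist.min(axis=1) <= dist_thresh).mean())

# Toy usage with random descriptors and 2D positions (hypothetical data).
rng = np.random.default_rng(0)
print(recall_at_k(rng.normal(size=(10, 64)), rng.normal(size=(100, 64)),
                  rng.uniform(0, 500, (10, 2)), rng.uniform(0, 500, (100, 2))))
```

One limitation the sketch makes visible: Recall@K rewards any single correct match and says nothing about false-positive rates at deployment, which is exactly the kind of gap the discussion questions above target.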
We invite you to submit high-quality extended abstracts aligned with the theme of our workshop. Also see the competition and financial support sections below.
- Novel or updated evaluation metrics
- Encoding equivariance or invariance in place representations
- Semantics-based localization
- Natural language, localization and navigation
- Large language models for place recognition and localization
- Foundation models for localization
- Neural implicit representations and models for mapping and localization
- Long-term autonomy
- Sequences/videos for place recognition and localization
- VPR for SfM versus VPR for SLAM
- Impact of VPR on the performance of SfM and SLAM
- From place recognition and 6-DoF localization to robot navigation
- SLAM vs Localization-only (given the map)
- New benchmarks and datasets
We invite you to submit high-quality research either as a 2-page extended abstract or a 4-page short paper. Page counts exclude references (i.e., 2 + n and 4 + n pages). You are encouraged to use IROS's suggested LaTeX format and upload a PDF (see below). The review process will be single-blind; that is, authors' names are not required to be anonymized, in line with IROS paper submissions. We encourage submissions of work-in-progress and work that is not yet published.
Accepted papers will be presented as posters, with a selected few in the spotlight lightning session.
Please upload your paper through OpenReview. For extended abstracts, you can write N/A in the abstract field when creating a submission on OpenReview. Please use the TLDR field in the submission to indicate whether you are submitting "new work" or it is an "abridged version of a parallel/accepted submission". These papers will be publicly accessible through the workshop webpage in a non-archival format, thus allowing future submission to archival venues. At least one author must be registered to attend IROS 2023 workshops to present their work (see registration).
[All deadlines: 23:59 UTC-0]
Event | Date |
---|---|
Paper Submission Open | 28 Jun 2023 |
Paper Submission Due | 24 Aug 2023 |
Reviews Out | 08 Sep 2023 |
Camera-Ready Due | 20 Sep 2023 |
Workshop Day | 01 Oct 2023 |
For decades, place recognition has been applied to a range of localization and navigation tasks, but only a few methods have been proposed for large-scale map assembly. Meanwhile, with the development of autonomous driving, last-mile delivery, and multi-agent cooperation, there is huge demand for efficient and accurate large-scale, crowd-sourced map updating. In this competition, General Place Recognition (GPR) for Autonomous Map Assembling, we provide a comprehensive evaluation platform of large-scale LiDAR/IMU datasets, repeatedly collected at different times in a variety of environments (city/park/indoor) with varying overlaps. The goal is to assemble a joint large-scale map relying mainly on place recognition, without any GPS assistance.
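To make the task concrete, the core step is discovering cross-session loop closures from descriptors alone. Below is a minimal Python sketch of one common strategy, mutual nearest-neighbor matching of global LiDAR scan descriptors; the descriptor source, dimensionality, and max_desc_dist threshold are illustrative assumptions, not part of the official GPR evaluation protocol.

```python
import numpy as np

def cross_session_candidates(desc_a, desc_b, max_desc_dist=0.5):
    """Return (i, j) pairs: scan i in session A matched to scan j in session B."""
    # Pairwise descriptor distances, shape (Na, Nb); brute force for clarity.
    d = np.linalg.norm(desc_a[:, None] - desc_b[None, :], axis=-1)
    nn_ab = d.argmin(axis=1)  # best session-B match for each session-A scan
    nn_ba = d.argmin(axis=0)  # best session-A match for each session-B scan
    # Keep only mutual matches whose descriptor distance is small enough.
    return [(i, int(j)) for i, j in enumerate(nn_ab)
            if nn_ba[j] == i and d[i, j] <= max_desc_dist]
```

Each surviving pair would then seed a relative-pose estimate (e.g., via scan registration), and those constraints are what stitch the per-session maps into the joint map that the competition scores.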
We invite you to participate in the competition, led by Peng Yin (CityU HK) and Sebastian Scherer (CMU). The winners will have the opportunity to present their work at this workshop. The challenge timeline is below:
[All deadlines: 23:59 UTC-0]
Event | Date |
---|---|
Release Initial Dataset & Eval Tools | 01 Aug 2023 |
Release Final Competition Set | 15 Sep 2023 |
Submission Close | 24 Sep 2023 |
Winners Notified | 25 Sep 2023 |
Winners Presentations | 01 Oct 2023 |
The workshop will provide substantial prizes in the following categories:
- Best Overall Presentation Award, sponsored by Nvidia:
  - 1 Jetson Orin + RTX 4090 GPU, and
  - a Jetson Nano for each co-author, up to a maximum of 5 authors.
- USD 500 - Runner-up Paper Presentation Award, given to the presenter at the lightning session.
- USD 500 - Runner-up Poster Presentation Award, given to the presenter at the poster sessions.
- USD 200 - Most engaging speaker amongst our invited speakers.
- USD 200 - Most active participant, actively engaging throughout the workshop event.
We aim to provide opportunities for all researchers to attend and to foster further research in this area. We are offering a scholarship program for researchers from under-represented geographic regions and demographics, totaling USD 3,500, which recipients can use for:
- funding IROS 2023 workshop registration fees to enable attendance at this workshop
- travel grants providing partial or full support for travel to attend the physical conference
- hardware support including GPUs
- software license support to help with conducting research in this area
Please use this form to apply for this support by 20 Aug 2023 (23:59 UTC-0); you will be informed of the outcome by 24 Aug 2023. Due to limited capacity, we cannot guarantee support for everyone, but we encourage you to apply as it only takes a few minutes.