
Multi-cam Multi-map Visual Inertial Localization: System, Validation and Dataset

Click to view our video


Introduction

The Multi-cam Multi-map Visual Inertial Localization (VILO) system is a high-performance, real-time localization solution designed for robotics. Unlike traditional VINS and SLAM systems that accumulate drift or rely on delayed corrections, VILO provides drift-free, causal localization directly integrated into the control loop, ensuring precise, instant feedback for autonomous operations in large, dynamic environments.

Key Features

  • Real-Time Localization and Mapping
    VILO provides accurate, drift-free pose estimates with support for both online and offline mapping. Real-time online mapping optimizes data collection on the fly, while offline processing enables the generation of high-precision, dense 3D maps through two-stage optimization.

  • Robust Multi-Cam Visual-Inertial Odometry (VIO)
    Utilizing multi-camera and IMU data, VILO’s VIO module includes resilient initialization and feature matching to maintain accurate localization even in challenging environments with high outlier rates.

  • Causal Evaluation Metrics
    VILO introduces tailored, causal evaluation metrics that assess localization accuracy in real time, eliminating the need for post-processing and enabling immediate feedback for live navigation and control (a minimal sketch of this idea follows this list).
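
As an illustration of the idea behind the causal metrics, and not the implementation in this repository, the sketch below scores each pose estimate against ground truth at the same timestamp with no post-hoc trajectory alignment, so the value reported at frame k depends only on data available up to frame k. The function name and the simulated trajectory are placeholders.

```python
import numpy as np

def causal_translation_error(est_positions, gt_positions):
    """Per-frame translation error, computed causally.

    Each estimate is compared with ground truth at the same timestamp and in
    the same (map) frame, without post-hoc trajectory alignment, so the value
    at frame k uses no future information.
    """
    est = np.asarray(est_positions, dtype=float)   # (N, 3) estimated positions
    gt = np.asarray(gt_positions, dtype=float)     # (N, 3) reference positions
    return np.linalg.norm(est - gt, axis=1)        # metres, one value per frame

# Toy usage: the running RMSE a controller would see online.
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(size=(100, 3)), axis=0)     # simulated reference path
est = gt + rng.normal(scale=0.05, size=gt.shape)      # simulated online estimates
err = causal_translation_error(est, gt)
running_rmse = np.sqrt(np.cumsum(err**2) / np.arange(1, err.size + 1))
print(f"causal RMSE after the last frame: {running_rmse[-1]:.3f} m")
```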


Table of Contents

  • System
  • Datasets
  • Links

System

The VILO system is a multi-camera, multi-map visual-inertial localization solution designed for real-time, drift-free pose estimation in large, dynamic environments.

System Architecture

Mapping-Mode

Captures multi-sensor data in real time, providing immediate feedback for optimized map creation. Offline processing then generates high-precision, dense 3D reconstructions.

Mapping Mode

Localization-Mode

Uses pre-built maps for accurate, consistent localization by combining multi-camera VIO with robust feature matching and outlier rejection, ensuring drift-free performance across diverse environments.

Localization Mode
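
The actual matching and fusion pipeline is in the linked code; as a rough illustration of the outlier-robust 2D-3D matching step described above, the sketch below matches live ORB features against stored map landmarks and solves a RANSAC PnP with OpenCV. All inputs (`query_kpts`, `map_points`, the thresholds, and so on) are placeholders, and in the real system the resulting pose would be fused with the multi-camera VIO rather than used on its own.

```python
import cv2
import numpy as np

def relocalize_against_map(query_kpts, query_desc, map_points, map_desc, K):
    """Estimate a camera pose against a pre-built map (illustrative sketch).

    query_kpts : (N, 2) float array, pixel coordinates of live-image features
    query_desc : (N, 32) uint8 ORB descriptors for those features
    map_points : (M, 3) float array, 3D landmark positions stored in the map
    map_desc   : (M, 32) uint8 descriptors attached to the landmarks
    K          : (3, 3) camera intrinsic matrix
    Returns (R, t, inlier_count), or None if no reliable pose is found.
    """
    # 1. Match live descriptors against map descriptors (Hamming norm for ORB).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(query_desc, map_desc)
    if len(matches) < 12:
        return None

    pts_2d = np.float32([query_kpts[m.queryIdx] for m in matches])
    pts_3d = np.float32([map_points[m.trainIdx] for m in matches])

    # 2. RANSAC PnP rejects wrong matches (the high-outlier case mentioned
    #    above) and estimates the pose from the surviving inliers.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d, pts_2d, K, None,
        iterationsCount=100, reprojectionError=3.0, confidence=0.99)
    if not ok or inliers is None or len(inliers) < 12:
        return None

    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec, len(inliers)
```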


Datasets

We offer a robust, nine-month dataset from Zhejiang University’s Zijingang Campus, spanning over 55 km and diverse real-world conditions. The dataset includes synchronized surround-view cameras, IMU, LiDAR, GPS, and INS data, making it an ideal resource for long-term testing of localization accuracy and robustness.

Hardware

Multi-sensor setup with synchronized cameras, IMU, LiDAR, GPS, and INS for robust data collection in dynamic environments.

Hardware Setup

Synchronization

A custom synchronization module ensures unified timestamps across all sensors, critical for precise mapping and localization.

Synchronization Process
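
Hardware triggering handles synchronization at the source; on the software side, users of the dataset may still need to pair measurements from different streams by timestamp. The snippet below is a generic nearest-timestamp pairing sketch, not code from this repository, and the 5 ms tolerance is an arbitrary example value.

```python
import bisect

def pair_by_timestamp(primary_stamps, secondary_stamps, tol=0.005):
    """Pair each primary timestamp with the nearest secondary one.

    primary_stamps, secondary_stamps : sorted lists of times in seconds
    tol : maximum allowed offset (5 ms here) for a pair to count as synced
    Returns a list of (i, j) index pairs into the two streams.
    """
    pairs = []
    for i, t in enumerate(primary_stamps):
        j = bisect.bisect_left(secondary_stamps, t)
        # Consider the neighbours on both sides of the insertion point.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(secondary_stamps)]
        if not candidates:
            continue
        best = min(candidates, key=lambda k: abs(secondary_stamps[k] - t))
        if abs(secondary_stamps[best] - t) <= tol:
            pairs.append((i, best))
    return pairs

# Toy usage: camera frames at 20 Hz paired with LiDAR sweeps at 10 Hz.
cam_stamps = [k * 0.05 for k in range(40)]
lidar_stamps = [k * 0.10 + 0.001 for k in range(20)]
print(len(pair_by_timestamp(cam_stamps, lidar_stamps)))   # 20 matched pairs
```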

Collected Data

The dataset includes nine months of multi-sensor data from Zhejiang University’s Zijingang Campus, covering over 55 km under diverse environmental conditions. It provides a robust basis for testing VILO’s performance in real-world, long-term scenarios.

1. Multi-Camera Data Samples

Two front-facing stereo cameras and two fisheye cameras (left and right front) capture a wide field of view.

Multi-Camera
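
How the released tools consume the fisheye streams is documented alongside the code; as a generic starting point, the sketch below undistorts a single fisheye frame with OpenCV's fisheye (Kannala-Brandt) camera model. The intrinsics, distortion coefficients, and file name are placeholders to be replaced with the calibration shipped with the dataset.

```python
import cv2
import numpy as np

# Placeholder calibration; replace with the intrinsics shipped with the dataset.
K = np.array([[285.0, 0.0, 640.0],
              [0.0, 285.0, 400.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.01, -0.002, 0.0005, -0.0001]).reshape(4, 1)  # Kannala-Brandt coeffs

img = cv2.imread("fisheye_left_front.png")   # hypothetical file name
if img is None:
    raise FileNotFoundError("point this at an actual image from the dataset")
h, w = img.shape[:2]

# Build the undistortion maps once, then remap every incoming frame.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, (w, h), np.eye(3), balance=0.0)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
```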

2. LiDAR Data Samples

A synchronized LiDAR-camera setup provides detailed spatial data.

LiDAR

3. Seasonal and Environmental Variations

Captured across multiple sessions, the dataset includes seasonal and environmental changes, showcasing differences in lighting, weather, and structural modifications across campus scenes.

Seasonal Variations


Links

VIO Code

Access the code for the VIO module from Multi-cam-Multi-map-VIO.

Executable Files and Sample Maps for Map Relocalization

Access the full relocalization code from Baidu Netdisk.
Password: 85ym

Complete Dataset

Download the complete dataset from Baidu Netdisk.
Password: uu8t