diff --git a/PREPARE_DATA.md b/PREPARE_DATA.md
index 469358d..c53ba9c 100644
--- a/PREPARE_DATA.md
+++ b/PREPARE_DATA.md
@@ -1,7 +1,7 @@
## IPBLab dataset
Downloading IPBLab dataset from our server:
```shell
-cd ir-mcl && mkdir data
+cd ir-mcl && mkdir data && cd data
wget https://www.ipb.uni-bonn.de/html/projects/kuang2023ral/ipblab.zip
unzip ipblab.zip
```
@@ -35,7 +35,7 @@ There is one sequence available for the localization experiments now, the full d
## Intel Lab dataset, Freiburg Building 079 dataset, and MIT CSAIL dataset
Downloading these three classical indoor 2D SLAM datasets from our server:
```shell
-cd ir-mcl && mkdir data
+cd ir-mcl && mkdir data && cd data
wget https://www.ipb.uni-bonn.de/html/projects/kuang2023ral/2dslam.zip
unzip 2dslam.zip
```
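Both download blocks above leave the archives under `ir-mcl/data`; the `&& cd data` added in this change matters because without it the archives unpack into the repository root instead of `data/`. A minimal offline sketch of the same preparation (the network steps are commented out; the final check is an assumption, not from the repo):

```shell
# Recreate the data directory exactly as the commands above do; the wget/unzip
# steps are commented out so the sketch runs without network access.
mkdir -p ir-mcl/data && cd ir-mcl/data
# wget https://www.ipb.uni-bonn.de/html/projects/kuang2023ral/ipblab.zip && unzip ipblab.zip
# wget https://www.ipb.uni-bonn.de/html/projects/kuang2023ral/2dslam.zip && unzip 2dslam.zip
# Confirm we are where the training scripts expect the data to live.
basename "$(pwd)"    # prints: data
```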
diff --git a/README.md b/README.md
index 22f0a74..f4fe48b 100644
--- a/README.md
+++ b/README.md
@@ -57,7 +57,7 @@ The code was tested with Ubuntu 20.04 with:
conda install -c conda-forge pybind11
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu116
- pip install pytorch-lightning
+ pip install pytorch-lightning tensorboardX
pip install matplotlib scipy open3d
pip install evo --upgrade --no-binary evo
```
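The added `tensorboardX` (together with the `protobuf` pin in `environment.yml`) is presumably needed by the pinned `pytorch-lightning` TensorBoard logging path. A quick post-install sanity sketch; the module list is an assumption drawn from the install commands above, and the script only reports, it does not fail:

```shell
# Report which of the freshly installed packages are importable; prints one
# line per module, "ok" or "MISSING" (module names assumed from the installs).
python3 - <<'EOF'
import importlib.util
for mod in ("torch", "pytorch_lightning", "tensorboardX", "open3d"):
    print(mod, "ok" if importlib.util.find_spec(mod) else "MISSING")
EOF
```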
@@ -117,21 +117,21 @@ Due to the space limitation of the paper, we provide some experimental results a
### Memory cost
We provide an ablation study comparing the memory cost of the occupancy grid map (OGM), the Hilbert map, and our neural occupancy field (NOF).
-| Maps type | Approximate memory | Loc. method | RMSE: location (cm) / yaw (degree) |
-|----------------------|--------------------|--------------------------|----------------------------------------|
-| OGM (5cm grid size) | 4.00MB | AMCL<br>NMCL<br>SRRG-Loc | 11.11/4.15<br>19.57/3.62<br>8.74/1.68 |
-| OGM (10cm grid size) | 2.00MB | AMCL<br>NMCL<br>SRRG-Loc | 15.01/4.18<br>36.27/4.04<br>12.15/1.53 |
-| Hilbert Map | 0.01MB | HMCL | 20.04/4.50 |
-| NOF | 1.96NB | IR-MCL | **6.62**/**1.11** |
+| Maps type | Approximate memory | Loc. method | RMSE: location (cm) / yaw (degree) |
+|:----------------------|:--------------------:|:--------------------------:|:--------------------------------------------:|
+| OGM (5cm grid size) | 4.00MB | AMCL<br>NMCL<br>SRRG-Loc | 11.11 / 4.15<br>19.57 / 3.62<br>8.74 / 1.68 |
+| OGM (10cm grid size) | 2.00MB | AMCL<br>NMCL<br>SRRG-Loc | 15.01 / 4.18<br>36.27 / 4.04<br>12.15 / 1.53 |
+| Hilbert Map | 0.01MB | HMCL | 20.04 / 4.50 |
+| NOF | 1.96MB | IR-MCL | **6.62** / **1.11** |
### Ablation study on fixed particle numbers
We also provide an experiment studying global localization performance when every method uses the same number of particles, fixed to 100,000. In the table below, all baselines and IR-MCL∗ always use 100,000 particles; IR-MCL is shown for reference.
-| Method | RMSE: location (cm) / yaw (degree) |
-|-------------------------------------------------|----------------------------------------------------------------------|
-| AMCL<br>NMCL<br>HMCL<br>SRRG-Loc<br>IR-MCL∗ | 11.56/4.12<br>19.57/3.62<br>20.54/4.70<br>8.74/1.68<br>6.71/**1.11** |
-| IR-MCL | **6.62**/**1.11** |
+| Method | RMSE: location (cm) / yaw (degree) |
+|:-------------------------------------------------------:|:------------------------------------------------------------------------------:|
+| AMCL<br>NMCL<br>HMCL<br>SRRG-Loc<br>IR-MCL∗ | 11.56 / 4.12<br>19.57 / 3.62<br>20.54 / 4.70<br>8.74 / 1.68<br>6.71 / **1.11** |
+| IR-MCL | **6.62** / **1.11** |
## Citation
diff --git a/environment.yml b/environment.yml
index 8d71f25..99f4457 100644
--- a/environment.yml
+++ b/environment.yml
@@ -27,6 +27,7 @@ dependencies:
- xz=5.2.10=h5eee18b_1
- zlib=1.2.13=h5eee18b_0
- pip:
+ - --extra-index-url https://download.pytorch.org/whl/cu116
- addict==2.4.0
- aiohttp==3.8.3
- aiosignal==1.3.1
@@ -57,7 +58,7 @@ dependencies:
- fsspec==2023.1.0
- idna==3.4
- importlib-metadata==6.0.0
- - ipykernel==6.20.2
+ - ipykernel==6.21.1
- ipython==8.9.0
- ipywidgets==8.0.4
- itsdangerous==2.1.2
@@ -90,6 +91,7 @@ dependencies:
- platformdirs==2.6.2
- plotly==5.13.0
- prompt-toolkit==3.0.36
+ - protobuf==3.20.1
- psutil==5.9.4
- ptyprocess==0.7.0
- pure-eval==0.2.2
@@ -113,6 +115,7 @@ dependencies:
- six==1.16.0
- stack-data==0.6.2
- tenacity==8.1.0
+ - tensorboardx==2.5.1
- threadpoolctl==3.1.0
- torch==1.13.1+cu116
- torchmetrics==0.11.1
diff --git a/shells/pretraining/fr079.sh b/shells/pretraining/fr079.sh
index ec5dae0..550bb0b 100644
--- a/shells/pretraining/fr079.sh
+++ b/shells/pretraining/fr079.sh
@@ -1,4 +1,3 @@
-cd ~/ir-mcl
python train.py \
--root_dir ./data/fr079 --N_samples 256 --perturb 1 --noise_std 0 --L_pos 10 \
--feature_size 256 --use_skip --seed 42 --batch_size 1024 --chunk 262144 \
diff --git a/shells/pretraining/intel.sh b/shells/pretraining/intel.sh
index 0ff2071..9116ebb 100755
--- a/shells/pretraining/intel.sh
+++ b/shells/pretraining/intel.sh
@@ -1,4 +1,3 @@
-cd ~/ir-mcl
python train.py \
--root_dir ./data/intel --N_samples 1024 --perturb 1 \
--noise_std 0 --L_pos 10 --feature_size 256 --use_skip --seed 42 \
diff --git a/shells/pretraining/ipblab.sh b/shells/pretraining/ipblab.sh
index 33eebf1..40250d0 100644
--- a/shells/pretraining/ipblab.sh
+++ b/shells/pretraining/ipblab.sh
@@ -1,4 +1,3 @@
-cd ~/ir-mcl
python train.py \
--root_dir ./data/ipblab --N_samples 256 --perturb 1 --noise_std 0 --L_pos 10 \
--feature_size 256 --use_skip --seed 42 --batch_size 1024 --chunk 262144 \
diff --git a/shells/pretraining/mit.sh b/shells/pretraining/mit.sh
index 36f9725..0b28182 100644
--- a/shells/pretraining/mit.sh
+++ b/shells/pretraining/mit.sh
@@ -1,4 +1,3 @@
-cd ~/ir-mcl
python train.py \
--root_dir ./data/mit --N_samples 1024 --perturb 1 --noise_std 0 --L_pos 10 \
--feature_size 256 --use_skip --seed 42 --batch_size 512 --chunk 262144 \
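Removing `cd ~/ir-mcl` means the pretraining scripts no longer hard-code a checkout location, but they must now be launched from the repository root so that `--root_dir ./data/...` resolves. A small guard sketch; the `run_pretraining` helper and the `train.py` presence check are assumptions, not part of the repo:

```shell
# Refuse to launch a pretraining script unless the current directory looks
# like the ir-mcl repository root; train.py is used as the marker file.
run_pretraining() {
    if [ ! -f train.py ]; then
        echo "error: run this from the ir-mcl repository root" >&2
        return 1
    fi
    bash "shells/pretraining/$1.sh"
}
# Usage (from the repository root): run_pretraining intel
```

Keeping the `cd` out of the scripts makes them work for any checkout path, at the cost of a working-directory convention; a guard like this turns a silent `./data/...` miss into an explicit error.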