GCNv2 is a high-throughput variant of the Geometric Correspondence Network (GCN) for performing RGB-D SLAM online on embedded platforms. We trained the binary descriptor in the same format as ORB (32 bytes) for convenient integration. In this implementation, we evaluate the motion estimation using a system built on top of [ORB-SLAM2](https://github.com/raulmur/ORB_SLAM2). Thanks to the robustness of ORB-SLAM2, our system achieves reliable real-time tracking performance on our drone platform.
Online running performance with ORB and GCNv2 features (demo clips: ORB vs. GCNv2).
- GCNv2: Efficient Correspondence Prediction for Real-Time SLAM, J. Tang, L. Ericson, J. Folkesson and P. Jensfelt, arXiv:1902.11046, 2019.
- Geometric Correspondence Network for Camera Motion Estimation, J. Tang, J. Folkesson and P. Jensfelt, RA-L and ICRA, 2018.
We use the new thread and chrono functionalities of C++11.
We use the PyTorch C++ API (libtorch) for deploying GCNv2. libtorch can be built as follows:
git clone --recursive -b v1.0.1 https://github.com/pytorch/pytorch
cd pytorch && mkdir build && cd build
python ../tools/build_libtorch.py
By default, the built libtorch library is located at pytorch/torch/lib/tmp_install/.
Update: Support has been added for the master branch of PyTorch and for versions newer than 1.0.1. For newer versions, set TORCH_PATH to pytorch/torch/share/cmake/Torch.
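For concreteness, the two cases above typically correspond to paths like the following. This is only a hedged sketch: the exact layout depends on your checkout, so verify where libtorch's CMake config (TorchConfig.cmake) actually lands before setting TORCH_PATH.

```bash
# Illustrative only: likely locations of libtorch's CMake config for the two
# cases described above. Verify against your own pytorch checkout and use the
# matching value for TORCH_PATH.

# pytorch v1.0.1 built with build_libtorch.py (install tree under tmp_install):
TORCH_PATH=$HOME/pytorch/torch/lib/tmp_install/share/cmake/Torch

# pytorch master or versions newer than 1.0.1:
TORCH_PATH=$HOME/pytorch/torch/share/cmake/Torch
```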
At least version 1.0.1 is required. Lower versions of PyTorch have a cuDNN linking issue: pytorch/pytorch#14033 (comment).
Please avoid using the pre-built version of libtorch, since it will cause linking errors (due to the CXX11 ABI issue).
We use Pangolin for visualization and the user interface. Download and install instructions can be found at: https://github.com/stevenlovegrove/Pangolin.
We use OpenCV to manipulate images and features. Download and install instructions can be found at: http://opencv.org.
At least version 2.4.3 is required. Tested with OpenCV 2.4.11 and OpenCV 3.2.
We use Eigen3, which is required by g2o (see below). Download and install instructions can be found at: http://eigen.tuxfamily.org.
At least version 3.1.0 is required.
We use modified versions of the DBoW2 library to perform place recognition and g2o library to perform non-linear optimizations. Both modified libraries (which are BSD) are included in the Thirdparty folder.
Clone the code
git clone https://github.com/jiexiong2016/GCNv2_SLAM.git
Then build the project
cd GCNv2_SLAM
./build.sh
Make sure to edit build.sh so that it points to your local libtorch installation; a hedged sketch of the relevant invocation is shown below.
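The sketch below only illustrates the idea: the exact contents of build.sh may differ, and how the TORCH_PATH variable mentioned above is consumed by the project's CMake files is an assumption here, so check build.sh itself for the variable name and flags it actually uses.

```bash
# Hypothetical sketch of the CMake invocation inside build.sh, passing the
# libtorch location through a TORCH_PATH variable. Adjust the path to your
# own libtorch build (see the prerequisites section above).
cd GCNv2_SLAM
mkdir -p build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release \
         -DTORCH_PATH=$HOME/pytorch/torch/share/cmake/Torch
make -j4
```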
Edit run.sh to see how to run with GCNv2 or vanilla ORB features (an illustrative invocation follows after this paragraph). Check Network.md for the network structure and links to the trained models.
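Since the system is built on top of ORB-SLAM2, the launch line in run.sh is expected to follow the usual ORB-SLAM2 RGB-D pattern (executable, vocabulary, settings file, sequence directory, association file). The sketch below is only illustrative: the executable name, the GCN_PATH variable used to select the model, and all file paths are placeholders, so replace them with the ones actually used in run.sh.

```bash
# Illustrative ORB-SLAM2-style RGB-D invocation; all names below are
# placeholders, not necessarily the repository's actual ones (consult run.sh):
#   <executable> <vocabulary> <settings.yaml> <sequence_dir> <associations.txt>
GCN_PATH=/path/to/gcn2_320x240.pt \
  ./rgbd_example ./Vocabulary/voc.bin ./settings_tum.yaml \
  /path/to/rgbd_dataset_freiburg1_desk /path/to/associations.txt
```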
Update Set "FULL_RESOLUTION=1" and use "gcn2_640x480.pt" to test with image resolution "640x480" intead. The input image size should be consitent with the model to be used.