RegHEC is a registration-based hand-eye calibration technique that uses multi-view point clouds of an arbitrary object. It aligns the multi-view point clouds by estimating the hand-eye relation, so point cloud registration and hand-eye calibration are achieved simultaneously. This makes it favorable for robotic 3-D reconstruction tasks, where the calibration and registration processes are normally conducted separately.
At the core of RegHEC are two novel algorithms. First, Bayesian Optimization based Initial Alignment (BO-IA) models the registration problem as a Gaussian process over the hand-eye relation, with the covariance function modified (given in `ExpSE3.cpp`) to be compatible with the distance metric of the 3-D motion space SE(3). It yields a coarse registration of the point clouds and then hands a proper initial guess of the hand-eye relation over to an ICP variant with Anderson Acceleration (AA-ICPv) for subsequent fine registration and accurate calibration.
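For intuition, here is a minimal stand-alone sketch of such an SE(3)-aware covariance function. It is an illustrative stand-in, not the repository's `ExpSE3.cpp`: it assumes a standard squared-exponential form whose distance metric is the geodesic distance ||log(T1⁻¹T2)|| on SE(3), computed with Sophus.

```cpp
#include <cmath>
#include <sophus/se3.hpp>

// Illustrative stand-in for an SE(3)-compatible squared-exponential kernel:
// two candidate hand-eye relations that are close in the geodesic sense on
// SE(3) receive a covariance close to sigma_sq, distant ones close to zero.
double expSE3Kernel(const Sophus::SE3d& T1, const Sophus::SE3d& T2,
                    double sigma_sq = 1.0, double length_scale = 1.0)
{
    // Geodesic distance between the two rigid-body motions: ||log(T1^{-1} * T2)||.
    const double d = (T1.inverse() * T2).log().norm();
    return sigma_sq * std::exp(-0.5 * d * d / (length_scale * length_scale));
}
```

Since the Gaussian process in BO-IA is defined over candidate hand-eye relations, a kernel of this kind judges correlation by how close two rigid-body motions actually are, rather than by a naive Euclidean distance on their parameters.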
As a general solution, RegHEC is applicable to most 3-D vision guided tasks, in both eye-in-hand and eye-to-hand scenarios, with no need for a specialized calibration rig (e.g., a calibration board); any available object will do. The technique has been verified as feasible and effective with a real robotic hand-eye system and a variety of arbitrary objects, including a cylinder, cone, sphere and simple plane, which can be quite challenging for correct point cloud registration and sensor motion estimation with existing methods.
For more information, please refer to:
- S. Xing, F. Jing, and M. Tan. RegHEC: Hand-Eye Calibration via Simultaneous Multi-view Point Clouds Registration of Arbitrary Object. arXiv preprint arXiv:2304.14092.
If you find our code helpful or use it in your project, please consider citing:
@misc{Xing2023-RegHEC,
title={RegHEC: Hand-Eye Calibration via Simultaneous Multi-view Point Clouds Registration of Arbitrary Object},
author={Shiyu Xing and Fengshui Jing and Min Tan},
year={2023},
eprint={2304.14092},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
I work on the Windows operating system with VS2019. This repository is a C++ solution developed with the following versions of external libraries:
Limbo 2.1
Limbo is an open-source C++ library for Bayesian optimization; it relies on NLopt for the maximization of the acquisition function. Both Limbo 2.1 and NLopt 2.5.0 precompiled with MSVC 2019 are given above for your convenience. Please note that Limbo is currently developed mostly for GNU/Linux, so we modified the system calls in `sys.hpp` (in `limbo-release-2.1/src/limbo/tools`) to make it compatible with Windows.
Besides, `ExpSE3.cpp` has already been copied to `limbo-release-2.1/src/limbo/kernel`, so the `limbo-release-2.1` folder above is basically all you need to build under Windows.
Sophus 1.0.0
Sophus is a C++ implementation of Lie groups commonly used for 2-D and 3-D geometric problems (e.g., computer vision or robotics applications). Sophus 1.0.0 is given above for your convenience.
PCL 1.11.1
PCL is a widely used C++ library for point cloud processing. We suggest installing `PCL-1.11.1-AllInOne-msvc2019-win64.exe` with the 3rd-party libraries checked, since this also installs Eigen and Boost, which are dependencies of Limbo as well. Abundant instructions for getting PCL ready are available elsewhere, so we do not detail the process here.
Clone or download this repo, open VS2019 and create a new console application. In the Solution Explorer, add `RegHEC.cpp`, `GNsolver.cpp` and `GNsolver.h` as existing items. In the project properties, set `%your path%\sophus-1.0.0\include`, `%your path%\limbo-release-2.1\src` and `%your path%\nlopt-2.5.0\include` as include directories. Set `%your path%\nlopt-2.5.0\lib` as a library directory and `nlopt.lib` as an additional dependency. Do not forget to configure the PCL-related settings and to copy `nlopt.dll` to the directory of the executable; then you are ready to run.
You are welcome to port it to Linux, but I have not done so. Note that if you do, you will need to switch the system API in Limbo's `sys.hpp` and use a build tool and dependencies compatible with Linux.
Multi-view point clouds in `.pcd` format and the corresponding robot poses in `RobotPoses.dat` (the pose of the flange frame w.r.t. the robot base frame at each viewpoint where a point cloud is captured).
The data used in the paper is given in the Data folder. In our experiments, point clouds were captured from at most 9 different viewpoints. Change the input directory `std::string path = "./data/David";` to try a different object. You can also try it with your own data.
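For reference, a minimal sketch of loading such multi-view clouds with PCL might look like the following; the numeric file names (`1.pcd` … `9.pcd`) and the `PointXYZ` point type are assumptions for illustration and should be matched to the actual files in the data folder.

```cpp
#include <stdexcept>
#include <string>
#include <vector>
#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Hypothetical loader: reads one cloud per viewpoint from the given directory.
std::vector<pcl::PointCloud<pcl::PointXYZ>::Ptr>
loadClouds(const std::string& path, int num_views)
{
    std::vector<pcl::PointCloud<pcl::PointXYZ>::Ptr> clouds;
    for (int i = 1; i <= num_views; ++i)
    {
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
        const std::string file = path + "/" + std::to_string(i) + ".pcd";   // assumed naming
        if (pcl::io::loadPCDFile<pcl::PointXYZ>(file, *cloud) < 0)
            throw std::runtime_error("Failed to load " + file);
        clouds.push_back(cloud);
    }
    return clouds;
}
```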
`RobotPoses.dat` gives the robot poses in 6 dimensions: the first 3 elements of each row are Euler angles for the orientation and the last 3 elements are the position.
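A sketch of turning those rows into rigid-body transforms might look like this; the Z-Y-X Euler convention and radian/metre units below are assumptions for illustration only and must be matched to your robot controller's convention and the actual file contents.

```cpp
#include <fstream>
#include <string>
#include <vector>
#include <Eigen/Core>
#include <Eigen/Geometry>

// Hypothetical reader for RobotPoses.dat: each row holds 3 Euler angles
// followed by 3 positions, converted to the flange pose w.r.t. the robot base.
std::vector<Eigen::Isometry3d> loadRobotPoses(const std::string& file)
{
    std::vector<Eigen::Isometry3d> poses;
    std::ifstream in(file);
    double a, b, c, x, y, z;                           // Euler angles, then translation
    while (in >> a >> b >> c >> x >> y >> z)
    {
        Eigen::Isometry3d T = Eigen::Isometry3d::Identity();
        T.linear() = (Eigen::AngleAxisd(a, Eigen::Vector3d::UnitZ()) *   // assumed Z-Y-X order
                      Eigen::AngleAxisd(b, Eigen::Vector3d::UnitY()) *
                      Eigen::AngleAxisd(c, Eigen::Vector3d::UnitX())).toRotationMatrix();
        T.translation() = Eigen::Vector3d(x, y, z);
        poses.push_back(T);                            // T_base_flange for this viewpoint
    }
    return poses;
}
```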
The current version is rather static; some simple modifications are needed to test with a number of viewpoints other than 9. We will make the solution more dynamic in a later commit.
Calibrated hand-eye relation and multi-view point cloud registration.
Registration of 9 David point clouds under the robot base frame and calibration results given by RegHEC (eye-in-hand)
Registration of 9 Gripper point clouds under the robot flange frame and calibration results given by RegHEC (eye-to-hand)
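To illustrate what the eye-in-hand result means, here is a hedged sketch (not code from the solution itself) of mapping each cloud into the robot base frame with the calibrated hand-eye relation X, i.e. T_base_cam_i = T_base_flange_i · X; for the eye-to-hand case the clouds are expressed under the flange frame analogously.

```cpp
#include <cstddef>
#include <vector>
#include <Eigen/Geometry>
#include <pcl/common/transforms.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Hypothetical illustration of the eye-in-hand transform chain: with the
// calibrated hand-eye relation X (camera frame w.r.t. flange frame), each
// cloud captured in the camera frame is mapped into the robot base frame by
// T_base_cam_i = T_base_flange_i * X; the mapped clouds should coincide.
pcl::PointCloud<pcl::PointXYZ>::Ptr stitchInBaseFrame(
    const std::vector<pcl::PointCloud<pcl::PointXYZ>::Ptr>& clouds,
    const std::vector<Eigen::Isometry3d>& flange_poses,   // T_base_flange_i from RobotPoses.dat
    const Eigen::Isometry3d& X)                           // calibrated hand-eye relation
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr merged(new pcl::PointCloud<pcl::PointXYZ>);
    for (std::size_t i = 0; i < clouds.size(); ++i)
    {
        const Eigen::Matrix4f T = (flange_poses[i] * X).matrix().cast<float>();
        pcl::PointCloud<pcl::PointXYZ> in_base;
        pcl::transformPointCloud(*clouds[i], in_base, T);  // camera frame -> base frame
        *merged += in_base;                                // accumulate in the base frame
    }
    return merged;
}
```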