
ds model calibration error is large #2

Open
Mediumcore opened this issue Feb 28, 2022 · 23 comments
Comments

@Mediumcore

Hello, thanks for the great work. I have reproduced your results on the evaluation dataset, and they are very good.
But when I use my own cameras on a similar rig (four back-to-back 220° fisheye cameras), I cannot reproduce your performance.
I think it is due to bad calibration. I followed https://gitlab.com/VladyslavUsenko/basalt/-/blob/master/doc/Calibration.md to calibrate the ds model. Below are the results:
Current error: 259499 num_points 130805 mean_error 1.98386 reprojection_error 151157 mean reprojection 1.15559 opt_time 16ms. Optimization Converged !!
I have tried many times and checked all the AprilGrid sizes and so on, but I cannot get a good calibration.
Would you give me some advice? I would appreciate it. Thanks for your great work.

@ameuleman
Collaborator

Thank you for your interest.
There seems to be a problem with calibration indeed. I am not sure what could be the reason for this. Here is what I obtain with default parameters:
Current error: 30604.5 num_points 109815 mean_error 0.278692 reprojection_error 42211.8 mean reprojection 0.38439

@Mediumcore
Author

Thanks for your reply.
May I ask which method you use for calibration: basalt_calibrate or Kalibr, AprilGrid or chessboard?
Can I separate the four cameras' intrinsic and extrinsic calibrations?
My fisheye cameras' resolution is quite low (1280×800), and I think this low resolution will inevitably hurt performance.

@ameuleman
Collaborator

I use basalt_calibrate with a 6x6 AprilGrid and joint intrinsic and extrinsic calibration. The input images for calibration are 1216x1216 pixels.
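For reference, a basalt_calibrate invocation for a four-camera ds rig might look like the sketch below. The flags follow the Basalt calibration guide linked earlier in this thread; the dataset path, AprilGrid config file, and output directory are placeholders.

```
basalt_calibrate \
  --dataset-path calib_data.bag --dataset-type bag \
  --aprilgrid aprilgrid_6x6.json \
  --result-path calib_result/ \
  --cam-types ds ds ds ds
```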

@Mediumcore
Author

Thanks very much.

@Mediumcore
Author

Hello, I wonder whether the calibration result from basalt_calibrate (config.json) can be used directly. I see the panorama is stitched well, but the depth panorama is bad. Can the calibration result from basalt_calibrate be used as-is, or does something need to be modified manually?

@ameuleman
Collaborator

Hello,
Large calibration errors such as the one you have are likely to make the cost volume computation unreliable, hurting the estimated distance quality significantly. The matching can also suffer from imaging artifacts: since the cost function is designed for efficiency, it tends not to be robust to vignetting or to differences in white balance and exposure between cameras. Another thing that can affect quality is improper masks. If masks are not defined, the algorithm will attempt to use sensor areas that are not adequate, such as regions occluded by the other cameras or the edge of the sensor. A last possible issue that comes to mind is a lack of synchronization between cameras in dynamic scenes.
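To illustrate the exposure/white-balance point with a generic sketch (this is not the repository's actual cost function): gain and offset differences between cameras break direct photometric matching, but normalizing each image over its valid mask removes affine intensity differences before comparison.

```python
import numpy as np

def normalize_exposure(img, mask):
    """Remove per-camera gain/offset by normalizing to zero mean and
    unit standard deviation over the valid (unmasked) pixels."""
    vals = img[mask]
    return (img - vals.mean()) / (vals.std() + 1e-8)

# Toy example: the same scene seen by two cameras with different
# gain (1.7x) and offset (+0.2), i.e. an affine intensity change.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
cam_a = scene.copy()
cam_b = 1.7 * scene + 0.2
mask = np.ones_like(scene, dtype=bool)
mask[:8, :] = False  # e.g. rows occluded by the rig are excluded

a = normalize_exposure(cam_a, mask)
b = normalize_exposure(cam_b, mask)
print(np.abs(a[mask] - b[mask]).max())  # tiny: affine difference removed
```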

@ynma-hanvo

Hi, it seems basalt_calibrate does not support calibrating more than 2 cameras. Did you calibrate 3 pairs in a system of 4 cameras?

@ameuleman
Collaborator

ameuleman commented Sep 16, 2022

Hello,
https://github.com/VladyslavUsenko/basalt-mirror/blob/master/doc/Calibration.md has an example with four cameras (the Kalibr dataset)

@ynma-hanvo

ynma-hanvo commented Sep 27, 2022

Hello, https://github.com/VladyslavUsenko/basalt-mirror/blob/master/doc/Calibration.md has an example with four cameras (the Kalibr dataset)

Thanks for clarifying. And just to make sure: in the provided resource, are the extrinsics in calibration.json all relative to the first camera?

@ameuleman
Collaborator

Yes, if you check the file, you will notice that the first camera's pose is identity. However, the depth estimation code stitches a panorama between the reference cameras.
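As a minimal sketch of consuming these poses (assuming Basalt's layout of a (px, py, pz) translation plus a (qx, qy, qz, qw) unit quaternion per camera), each entry converts to a 4x4 camera-to-reference transform, and the first camera comes out as the identity:

```python
import numpy as np

def pose_to_matrix(px, py, pz, qx, qy, qz, qw):
    """Build a 4x4 camera-to-reference transform from a
    (translation, unit quaternion) pose."""
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [px, py, pz]
    return T

# The first entry in calibration.json has zero translation and identity
# rotation, so the first camera defines the reference frame.
T0 = pose_to_matrix(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0)
print(np.allclose(T0, np.eye(4)))  # True
```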

@rbarnardo

Hi, a quick question on calibration: did you use the bag format or the EuRoC format? And how were the data folders formatted to use all 4 cameras in basalt_calibrate? Did you use ROS to do this?
Thanks very much

@ameuleman
Collaborator

Hi,
We use Kalibr's bagcreater script to create a bag file from the images.
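For anyone else setting this up, Kalibr's bagcreater expects one folder per camera with images named by their timestamp in nanoseconds. A typical layout and call might look like the following sketch (folder and file names are placeholders):

```
# dataset/
#   cam0/<timestamp_ns>.png ...
#   cam1/<timestamp_ns>.png ...
#   cam2/<timestamp_ns>.png ...
#   cam3/<timestamp_ns>.png ...
kalibr_bagcreater --folder dataset/ --output-bag calib.bag
```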

@rbarnardo

Hi again, rather than starting a whole new issue I thought I would just post here.
I am attempting to use this code with 4 fisheye cameras in a different configuration (side by side, rather than at 90-degree angles). I was expecting the output not to be as accurate, but currently the panorama stitching has a ghosting effect and the distance map is very far off. Is this possible with an alternative camera configuration? If so, is there anything I would need to change in the code?
Thanks very much!

@ameuleman
Collaborator

Hi,
I would not expect this to be an issue: the main dataset we render has the same relative layout as OmniHouse (four cameras in the same plane, if I recall correctly).
There could be other issues. Are the calibration results sensible? Is the scale of the rig different? --min_dist might not be appropriate depending on the rig.

@rbarnardo

Current error: 7818.4 num_points 11678 mean_error 0.669498 reprojection_error 5309.73 mean reprojection 0.454678 opt_time 6ms.
These are the calibration results. The scale is similar I believe, I will try tweaking --min_dist and see if this helps.
Thanks very much!

@ameuleman
Collaborator

And what is the calibration you get?

@rbarnardo

The calibration has the following error:
Current error: 9321.57 num_points 19256 mean_error 0.484086 reprojection_error 8648.64 mean reprojection 0.44914 opt_time 3ms.
The first part of the calibration .json looks as follows:
{
  "value0": {
    "T_imu_cam": [
      {
        "px": 0.0,
        "py": 0.0,
        "pz": 0.0,
        "qx": 0.0,
        "qy": 0.0,
        "qz": 0.0,
        "qw": 1.0
      },
      {
        "px": 0.0011283775224488562,
        "py": -0.009325858669332464,
        "pz": -0.030513329482196224,
        "qx": 0.019144730363173595,
        "qy": 0.014464803637306076,
        "qz": 0.00009541382868177429,
        "qw": 0.9997120783761978
      },
      {
        "px": 0.049477859252123609,
        "py": 0.13854548867801184,
        "pz": 0.003122071182463934,
        "qx": -0.00471225662771139,
        "qy": -0.009381521500672173,
        "qz": -0.0023731417113784555,
        "qw": 0.9999420732673593
      },
      {
        "px": -0.04494123790109955,
        "py": 0.12652640113918524,
        "pz": -0.028362482823071676,
        "qx": 0.029948330785623178,
        "qy": 0.013109201052339535,
        "qz": 0.0005221546141103406,
        "qw": 0.9994653439141766
      }
    ],
    "intrinsics": [
      {
        "camera_type": "ds",
        "intrinsics": {
          "fx": 141.8308555112321,
          "fy": 141.76473364351467,
          "cx": 320.18691694447429,
          "cy": 317.96202305068689,
          "xi": -0.2934341520830758,
          "alpha": 0.5726533480834829
        }
      },
      {
        "camera_type": "ds",
        "intrinsics": {
          "fx": 125.25734813634998,
          "fy": 125.55154978494309,
          "cx": 318.10899584184616,
          "cy": 321.3068737784978,
          "xi": -0.36019286169848727,
          "alpha": 0.5288302881071429
        }
      },
      {
        "camera_type": "ds",
        "intrinsics": {
          "fx": 136.0518226838299,
          "fy": 136.11625901149189,
          "cx": 319.01784163426296,
          "cy": 319.8666213694254,
          "xi": -0.31542911117829089,
          "alpha": 0.5722374558169866
        }
      },
      {
        "camera_type": "ds",
        "intrinsics": {
          "fx": 364.0354387381851,
          "fy": 365.13742629395656,
          "cx": 321.37156195555436,
          "cy": 319.3883988573107,
          "xi": 0.8575528258392979,
          "alpha": 0.78659073358256
        }
      }
    ],
    "resolution": [
      [640, 640],
      [640, 640],
      [640, 640],
      [640, 640]
    ],
    "calib_accel_bias": [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    "calib_gyro_bias": [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],

And then the output panorama looks like this:
[attached image: rgb_0, the stitched panorama]
The distance map looks just as messy.
Please let me know if you have any suggestions on where to go from here. Thank you so much!
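One quick sanity check on the extrinsics above (assuming Basalt's translations are in meters): the pairwise camera baselines implied by the (px, py, pz) values, which are worth comparing against the rig's physical dimensions and the --min_dist setting.

```python
import numpy as np

# Camera positions (px, py, pz) taken from the calibration .json above.
positions = np.array([
    [0.0, 0.0, 0.0],
    [0.0011283775224488562, -0.009325858669332464, -0.030513329482196224],
    [0.049477859252123609, 0.13854548867801184, 0.003122071182463934],
    [-0.04494123790109955, 0.12652640113918524, -0.028362482823071676],
])

# Print all pairwise baselines in centimeters.
for i in range(4):
    for j in range(i + 1, 4):
        d = np.linalg.norm(positions[i] - positions[j])
        print(f"cam{i}-cam{j}: {d * 100:.1f} cm")
```

For example, the cam0-cam2 baseline comes out around 15 cm, while cam0-cam1 is only about 3 cm, so it is worth verifying these against the physical rig.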

@ameuleman
Collaborator

The rotations (qx, qy, qz, qw) seem to indicate that all cameras are facing the same direction. Is that the case?
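This can be checked directly from the posted qw values: for a unit quaternion, the rotation angle is 2·arccos(|qw|), and all four cameras come out within a few degrees of cam0's orientation, whereas a back-to-back rig should show rotations near 90° or 180°.

```python
import math

# qw values from the four T_imu_cam entries in the posted calibration .json.
qw = [1.0, 0.9997120783761978, 0.9999420732673593, 0.9994653439141766]

# Rotation angle of each camera relative to cam0's identity pose.
angles_deg = [math.degrees(2 * math.acos(min(abs(w), 1.0))) for w in qw]
print(["%.1f deg" % a for a in angles_deg])  # all under ~4 degrees
```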

@rbarnardo

Ah, this is not the case: cam0 and cam2 point in one direction and cam1 and cam3 point in the opposite direction. How would I fix this?

@ameuleman
Collaborator

There might not be enough markers captured in the overlapping region of the front and back cameras.

@rbarnardo

I see, I will try again with calibration, thanks for the help!

@rbarnardo

Thanks so much for the help with this. I noticed in the research paper that you compared this method to OmniMVS and CrownConv. If I may ask here: how did you get these methods working on a camera configuration that had only vertical displacement, rather than horizontal displacement? Thank you very much

@ameuleman
Collaborator

Hi,
Our rig has both vertical and horizontal displacement (see the calibration file).
Both CrownConv and OmniMVS can work with non-planar rigs and exploit vertical and horizontal displacements. In Table 3 of the main paper, we use a planar rig to match OmniMVS and CrownConv's training layout and we show in Table 3 of the supplemental document that all methods benefit from the additional vertical baseline. We also observed that CrownConv seems more resilient to discrepancies between training and testing rigs.
BTW, please do not hesitate to open new issues for questions on different topics.
