Poor performance on custom dataset #25
Do you have any lidar point cloud in your side views?
Hi, it seems that the images were not captured by a pinhole camera. Our method may fail to reconstruct the scene if your images are not well rectified.
There is no lidar in the side views in the first scene; there is only a front M1 lidar. @CuriousCat-7
Yeah, as far as I know, their method should perform poorly on dynamic objects when there is no lidar supervision. This is a fairly general conclusion for Gaussian-splatting-based methods. Trying a mono-depth pseudo point cloud may alleviate this problem.
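To make the pseudo point cloud idea concrete, here is a minimal sketch of back-projecting a monocular depth map into camera-frame 3D points, assuming a pinhole camera with known intrinsics K and a metric (or scale-aligned) depth map; the function name and toy values are illustrative, not part of this repository's API.

```python
import numpy as np

def depth_to_pointcloud(depth, K):
    """Back-project a per-pixel depth map into camera-frame 3D points.

    depth: (H, W) metric depth, e.g. from a scale-aligned mono-depth model
    K:     (3, 3) pinhole intrinsics
    Returns (H*W, 3) points in the camera frame.
    """
    H, W = depth.shape
    # Pixel grid: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    # Pixel -> normalized camera ray via the inverse intrinsics,
    # then scale each ray by its depth.
    rays = pix @ np.linalg.inv(K).T
    return rays * depth.reshape(-1, 1)

# Toy example: 640x480 image, constant 5 m depth, simple intrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 5.0)
pts = depth_to_pointcloud(depth, K)
print(pts.shape)  # (307200, 3)
```

The resulting points can then be transformed into world coordinates with the camera pose and used as initialization or supervision in place of missing lidar returns.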
Thanks for the great work~
I used 3 cameras (front_far, front_left, front_right), 50 frames, and a panda 128 lidar on my own dataset; the rest of the config is the same as the default config.
The train and test metrics:
{"split": "train", "iteration": 30000, "psnr": 31.93791029746072, "ssim": 0.9148447503123367, "lpips": 0.2528826928975289}
{"split": "test", "iteration": 30000, "psnr": 23.408180978563095, "ssim": 0.7495270156198077, "lpips": 0.3991368371579382}
The rendered results on the test data:
The rendered images seem poor, and I found that the first few frames were rendered okay, but the later ones were much worse. I changed fix_radius to 10.0 and 15.0, but it does not seem to help. Could you please give me some advice? What parameters need to be adjusted?
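One way to pin down "first frames okay, later frames worse" quantitatively is to compute PSNR per test frame rather than averaged over the split, then look for where the curve drops. This is a generic sketch using the standard PSNR definition; how renders and ground-truth images are loaded depends on your pipeline and is left out here.

```python
import numpy as np

def psnr(render, gt, max_val=1.0):
    """Standard PSNR (dB) between two images with values in [0, max_val]."""
    mse = np.mean((render - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def per_frame_psnr(renders, gts):
    """Per-frame scores in sequence order, to localize where quality drops."""
    return [psnr(r, g) for r, g in zip(renders, gts)]

# Toy check: a uniform error of 0.1 gives MSE = 0.01, i.e. 20 dB.
gt = np.zeros((4, 4, 3))
render = gt + 0.1
print(round(psnr(render, gt), 2))  # 20.0
```

If the per-frame curve falls off smoothly toward the end of the sequence, that points at accumulated pose drift or dynamic objects rather than a single bad parameter.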