Why opacity compensation on the 3D filter? This makes small Gaussians non-opaque. #48
Hi, you are correct. The other solution is to just restrict a minimum size for each Gaussian. I tried that at some point but found small artifacts when rendering higher-resolution images (maybe just a bug in this experiment, not sure). Implementing it as a 3D filter makes it consistent with the 2D filter: in this case the combined effect (3D filter with kernel size 0.2 and 2D mip filter with kernel size 0.1) will be similar to using an EWA filter (with kernel size 0.3). On the other hand, the 2D filter could also be implemented as a 3D filter by choosing the 3D kernel size based on the depth of the Gaussian.

Thanks for the pointer to the opacity compensation in the 3DGS repo. The major difference is the kernel size. As we explained in our paper, the kernel size should be chosen to approximate a single pixel. See #18 (comment) for a comparison and more results in our paper.
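The claim that the 3D filter and the 2D mip filter compose like a single EWA filter follows from the fact that convolving two Gaussians adds their covariances. A minimal numeric sketch of this (treating the kernel sizes from the comment above as variances, which is an assumption about the convention):

```python
import numpy as np

# Convolving two zero-mean Gaussians produces a Gaussian whose
# covariance is the sum of the two covariances. Using the kernel
# sizes from the comment above (treated as variances):
kernel_3d = 0.2   # 3D low-pass filter
kernel_2d = 0.1   # 2D mip filter
combined = kernel_3d + kernel_2d
print(round(combined, 2))  # 0.3, matching the EWA kernel size

# Sanity check by sampling: the variance of the sum of two
# independent Gaussian variables is the sum of their variances.
rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(kernel_3d), size=200_000)
y = rng.normal(0.0, np.sqrt(kernel_2d), size=200_000)
print(round(float(np.var(x + y)), 2))  # ~0.3
```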
I added TL;DR: the performance is basically unchanged but the number of Gaussians created is much higher. I am trying to examine the PLYs to see why.

benchmark_nerf_synthetic_ours_mtmt: Multi-scale Training and Multi-scale Testing on the Blender dataset
PSNR:
SSIM:
LPIPS:
Count:

benchmark_nerf_synthetic_ours_stmt: Single-scale Training and Multi-scale Testing on the Blender dataset
PSNR:
SSIM:
LPIPS:
Count:

benchmark_360v2_ours: Multi-scale Training and Multi-scale Testing on the Mip-NeRF 360 dataset
PSNR:
SSIM:
LPIPS:
Count:

benchmark_360v2_ours_stmt: Single-scale Training and Multi-scale Testing on the Mip-NeRF 360 dataset
PSNR:
SSIM:
LPIPS:
Count:
@niujinshuchong Thank you for pointing out that the 2D variance was also wrong in the 3DGS implementation! They probably mixed up variance with standard deviation! (sqrt(0.1) ≈ 0.3)
Hi, thanks for sharing the results. Are you using the latest codebase for the above experiments? The improvements over the paper results come from the improved densification in our GOF project https://github.com/autonomousvision/gaussian-opacity-fields. To reproduce or to compare with the paper results, you need to use a previous commit.
@niujinshuchong I ran the full benchmarks, see results above. |
The justification for the 3D filter in the paper comes from the fact that "a primitive smaller than 2T̂ may result in aliasing artifacts during the splatting process, since its size is below twice the sampling interval."
In practice, this means that there is a minimum Gaussian scale (filter_3D in the code), and the real scales are computed from the scale parameters accordingly. However, the code actually implements it as a 3D Gaussian filter, and also applies opacity compensation in 3D.
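For reference, a minimal sketch of what the two operations look like, modeled on the structure of the mip-splatting codebase (names and shapes here are assumptions for illustration; the actual code operates on PyTorch tensors):

```python
import numpy as np

def scales_with_3d_filter(scales, filter_3d):
    """Effective per-axis scales after convolving each Gaussian with
    the isotropic 3D low-pass filter: the variances add."""
    return np.sqrt(np.square(scales) + filter_3d ** 2)

def opacity_with_3d_filter(opacity, scales, filter_3d):
    """Opacity compensation: multiply opacity by the square root of the
    ratio of covariance determinants before/after filtering, so the
    Gaussian's total 3D integral is preserved."""
    det_before = np.prod(np.square(scales), axis=-1)
    det_after = np.prod(np.square(scales) + filter_3d ** 2, axis=-1)
    return opacity * np.sqrt(det_before / det_after)

# The case discussed in this issue: a round Gaussian whose parameter
# scale equals filter_3D, with parameter opacity 1.0.
filter_3d = 0.1
scales = np.array([filter_3d, filter_3d, filter_3d])
print(scales_with_3d_filter(scales, filter_3d) / filter_3d)  # sqrt(2) per axis
print(opacity_with_3d_filter(1.0, scales, filter_3d))        # ~0.354
```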
This last part (opacity compensation), in my opinion, is wrong, because this prevents small Gaussians from being 100% opaque.
For example, a round Gaussian whose "parameter" scale is exactly filter_3D (and thus whose "real" scale is sqrt(2)*filter_3D), with a "parameter" opacity of 1, will have a "real" opacity of only about 35%: the determinant ratio gives sqrt(1/8) ≈ 0.354.
The effect is similar on larger Gaussians: no Gaussian can be 100% opaque with this 3D opacity compensation.
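A quick numeric check of these two claims, assuming (as in the mip-splatting-style implementation) that the compensation factor is the square root of the ratio of covariance determinants before and after filtering, which for an isotropic Gaussian reduces to a single power:

```python
import numpy as np

def compensation(scale, filter_3d):
    """Opacity compensation factor for an isotropic 3D Gaussian of
    per-axis scale `scale`: sqrt(det_before / det_after) collapses to
    (s^2 / (s^2 + f^2))^(3/2) when all three axes are equal."""
    return (scale ** 2 / (scale ** 2 + filter_3d ** 2)) ** 1.5

f = 0.1
print(round(compensation(f, f), 3))  # 0.354: the ~35% case from above

# The factor approaches 1 for large Gaussians but never reaches it,
# so no Gaussian can be fully opaque under 3D opacity compensation.
for s in (1.0, 10.0, 100.0):
    assert compensation(s, f) < 1.0
print(compensation(10.0, f))
```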
Conclusion: The 3D filter should really be only about setting a minimum size, without adjusting the opacity, so that small Gaussians can be opaque.
Of course, I totally agree with opacity compensation on 2D Gaussians (which I proposed in Oct 2023 graphdeco-inria/gaussian-splatting#294 (comment))