Results with no variational depth map refinement #35

Open
eddienewton opened this issue Mar 21, 2019 · 10 comments

@eddienewton

Hi,

R-MVSNet appears to have improved the memory management of MVSNet. Good job!

I have a question about the point-cloud results without depth-map refinement (Figure 1 in your paper). I tried to reproduce them, and the attached views compare my results against the published point clouds. The front views look very similar. From the side view, however, the refined mesh is cleaner and sharper, while my results have more noise.

I used settings of 1600×1200 with 256 depth planes, and set the probability threshold to 0.1. Is there anything else that could affect the noise level, or do my results match what you saw?

Thanks!

[Attached images: front_view, side_view]

@tejaskhot

@eddienewton Can you share how you managed to run with 1600x1200x256? What GPU did you use? I was unable to fit the full resolution in memory even with 16 GB of RAM. Are you using half precision or some hack you could share?

@eddienewton
Author

@tejaskhot I believe I'm using a P100 with 16 GB of RAM. Are you sure you're running R-MVSNet and not the older MVSNet? The older version has problems with memory. I'm using the standard R-MVSNet settings.

@YoYo000
Owner

YoYo000 commented Mar 22, 2019

@eddienewton The variational refinement can significantly reduce the noise level.

Another thing is the depth map fusion. In the provided code I use Fusibile as the fusion implementation. However, I recently found that it produces a higher level of noise than the fusion method proposed in the paper.
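
For readers trying to reproduce the paper-style fusion: the usual MVSNet-style geometric test reprojects each reference depth through a source view and keeps it only if the round trip is consistent. Below is a minimal numpy sketch of that idea; the function name, camera conventions, and thresholds are my own assumptions (the 1-pixel / 1% ballpark echoes the MVSNet paper, but this is not the repository's code):

```python
import numpy as np

def geometric_consistency_mask(depth_ref, K_ref, E_ref, depth_src, K_src, E_src,
                               pix_thresh=1.0, rel_depth_thresh=0.01):
    """Keep a reference depth only if it survives a round trip through a
    source view: project to the source, read the source depth there, project
    back, and require the landing pixel and depth to be close to the start.
    Extrinsics E are 4x4 world-to-camera; images are assumed the same size.
    Thresholds are illustrative guesses, not this repository's defaults."""
    h, w = depth_ref.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")

    # Lift reference pixels to 3D camera coordinates, then world coordinates.
    pix = np.stack([u * depth_ref, v * depth_ref, depth_ref], axis=0).reshape(3, -1)
    cam_ref = np.linalg.inv(K_ref) @ pix
    R_ref, t_ref = E_ref[:3, :3], E_ref[:3, 3:4]
    world = R_ref.T @ (cam_ref - t_ref)

    # Project into the source view and sample its depth (nearest neighbour).
    R_src, t_src = E_src[:3, :3], E_src[:3, 3:4]
    cam_src = R_src @ world + t_src
    proj = K_src @ cam_src
    z = np.maximum(proj[2], 1e-8)
    u_src = np.clip(np.round(proj[0] / z).astype(int), 0, w - 1)
    v_src = np.clip(np.round(proj[1] / z).astype(int), 0, h - 1)
    d_src = depth_src[v_src, u_src]

    # Lift the sampled source depth back to world and into the reference view.
    pix_src = np.stack([u_src * d_src, v_src * d_src, d_src], axis=0)
    world_back = R_src.T @ (np.linalg.inv(K_src) @ pix_src - t_src)
    cam_back = R_ref @ world_back + t_ref
    proj_back = K_ref @ cam_back
    z_back = np.maximum(cam_back[2], 1e-8)
    u_back = proj_back[0] / z_back
    v_back = proj_back[1] / z_back

    # Consistent if the round trip returns near the original pixel and depth.
    pix_err = np.hypot(u_back - u.ravel(), v_back - v.ravel())
    depth_err = np.abs(cam_back[2] - depth_ref.ravel()) / np.maximum(depth_ref.ravel(), 1e-8)
    return ((pix_err < pix_thresh) & (depth_err < rel_depth_thresh)).reshape(h, w)
```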

@YoYo000
Owner

YoYo000 commented Mar 22, 2019

> @eddienewton Can you share how you managed to run with 1600x1200x256? What GPU did you use? I was unable to fit the full resolution in memory even with 16 GB of RAM. Are you using half precision or some hack you could share?

I used to run MVSNet on Google Cloud ML Engine with a P100 GPU.

@eddienewton
Author

@YoYo000 thanks for the clarification; that makes sense. I wonder whether some single-image depth refinement algorithms might mimic your refinement technique?

For your depth refinement, you're getting about 7 seconds with a C++ and CUDA implementation? Do you think this can be improved?

Regarding fusion noise, I believe adding the depth-map normals to the depth fusion would reduce the noise. I took the cross product of the left-right and top-bottom 3D points to calculate the normals. It seemed to help a bit.
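
As a rough illustration of the cross-product idea above, here is a numpy sketch under an assumed pinhole intrinsic matrix K; the function is hypothetical, not the code that was later posted:

```python
import numpy as np

def normals_from_depth(depth, K):
    """Per-pixel normals from a depth map: back-project to 3D and take the
    cross product of the left-right and top-bottom point differences.
    A sketch of the idea described above, not the exact Fusibile patch."""
    h, w = depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")

    # Back-project every pixel to camera-space 3D: X = K^-1 @ [u*d, v*d, d].
    pix = np.stack([u * depth, v * depth, depth], axis=-1)  # (h, w, 3)
    pts = pix @ np.linalg.inv(K).T

    # Left-right and top-bottom differences of the 3D points.
    dx = np.zeros_like(pts)
    dy = np.zeros_like(pts)
    dx[:, 1:-1] = pts[:, 2:] - pts[:, :-2]
    dy[1:-1, :] = pts[2:, :] - pts[:-2, :]

    # Cross product of the two tangent directions gives the normal.
    n = np.cross(dx, dy)
    n /= np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), 1e-8)

    # Flip normals so they face the camera (camera looks along +Z at pts).
    facing_away = np.sum(n * pts, axis=-1, keepdims=True) > 0
    return np.where(facing_away, -n, n)
```

Central differences keep the estimate symmetric; switching to one-sided differences near depth discontinuities would be a further refinement.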

@YoYo000
Owner

YoYo000 commented Mar 24, 2019

@eddienewton I think single-image depth refinement can help reduce the noise; however, it ultimately amounts to something like "smoothing" the depth map. Personally I put more faith in photo-consistency-based multi-view refinement, as that is how depth values are computed in MVS reconstruction.
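
To make the contrast concrete: photo-consistency refinement scores a candidate depth by how well the reference patch matches its reprojection in a source view, rather than by how smooth the depth map looks. A minimal single-source SSD sketch follows; the function name, window size, and (R, t) conventions are illustrative assumptions, and the paper's E_photo is a more elaborate multi-view formulation:

```python
import numpy as np

def photo_consistency_cost(d, uv_ref, img_ref, img_src, K_ref, K_src, R, t, win=5):
    """SSD photo-consistency of hypothesising depth d at reference pixel uv_ref.

    Back-projects the pixel, maps it into the source view with (R, t)
    (reference-to-source rotation and translation), projects, and compares
    grayscale windows. Assumes uv_ref lies at least win//2 pixels inside
    img_ref. Illustrative only; E_photo in the paper is multi-view."""
    u, v = uv_ref
    p_cam = np.linalg.inv(K_ref) @ np.array([u * d, v * d, d], dtype=np.float64)
    p_src = K_src @ (R @ p_cam + t)
    if p_src[2] <= 0:
        return np.inf                      # point behind the source camera
    us = int(round(p_src[0] / p_src[2]))   # nearest-pixel lookup; bilinear
    vs = int(round(p_src[1] / p_src[2]))   # sampling would be differentiable

    r = win // 2
    if not (r <= us < img_src.shape[1] - r and r <= vs < img_src.shape[0] - r):
        return np.inf                      # window falls outside the source
    ref_patch = img_ref[v - r:v + r + 1, u - r:u + r + 1].astype(np.float64)
    src_patch = img_src[vs - r:vs + r + 1, us - r:us + r + 1].astype(np.float64)
    return float(np.sum((ref_patch - src_patch) ** 2))
```

Minimizing this over d ties the refined depth to image evidence across views instead of only to neighbouring depth values.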

The implementation can be further improved. After carefully dealing with the CPU/GPU I/O, I think the algorithm could be several times faster. I will try this idea later.

Adding normals to the fusion sounds like an easy but effective change :) Could you please share some of your results? Maybe we can add this to the Fusibile fusion.

@eddienewton
Author

@YoYo000 regarding Fusibile, I'll post my code this upcoming week. Basically, I'm just going to update the depthfusion.py code to create the normals.

Regarding variational refinement, I was trying to implement and test your refinement and have a couple of questions.

(1) For each iteration, are you doing gradient descent on each pixel's depth independently? Or are you using gradient descent to derive a global linear system and solving for depth that way?

(2) Is your initial gradient descent gain set to 10.0? Or was this a typo?

Thanks!

@YoYo000
Owner

YoYo000 commented Apr 10, 2019

@eddienewton we update each pixel's depth by gradient descent independently, and the gain is set to 10.0.

@eddienewton
Author

@YoYo000 Thanks for the answer. I might be doing the refinement wrong. In my code based on the paper, a gain of 10 is unstable.

Was there any data normalization you did to get the refinement to work?

If I ignore E_photo, minimizing the E_smooth term alone should be stable. For a given pixel, the gradient-descent update becomes:

d = d - gain * sum_i(2 * w_i * (d - d_i));

With w varying between 0 and 1 (closer to 1 in my case), wouldn't a gain of 10 cause overshoot? Or are you doing the gradient descent at the matrix level?
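
As a toy check of the update above (the weights and neighbour depths below are made-up values): holding the neighbours fixed, the update is d ← (1 − 2·gain·Σw)·d + const, which contracts only when gain < 1/Σw. With four bilateral-style weights near 1 that bound is roughly 0.25–0.3, so a gain of 10 overshoots badly:

```python
import numpy as np

# Toy check of the per-pixel update  d <- d - gain * sum_i 2*w_i*(d - d_i).
# Holding the neighbours fixed, d_new = (1 - 2*gain*sum(w)) * d + const,
# so the iteration contracts only when gain < 1/sum(w).
# The weights and depths below are made-up illustrative values.
w = np.array([0.9, 0.8, 1.0, 0.7])             # 4-neighbour weights, close to 1
d_neighbors = np.array([1.0, 1.1, 0.9, 1.05])  # neighbouring depths
# Stability bound for these weights: 1/sum(w) ~= 0.29.

for gain in (0.1, 0.25, 10.0):
    d = 2.0                                    # deliberately bad initial depth
    for _ in range(50):
        d = d - gain * np.sum(2.0 * w * (d - d_neighbors))
    print(f"gain={gain:5.2f} -> d={d:.3e}")
# Gains 0.10 and 0.25 settle near the weighted mean (~1.004);
# gain 10.00 diverges to astronomical magnitude -- the overshoot described above.
```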

Thanks!

@tatsy

tatsy commented Nov 3, 2020

I agree with @eddienewton: having implemented the variational depth refinement following the R-MVSNet paper, I also find that the default step size lambda(0) = 10.0 is too large and results in unstable optimization. In terms of ordinary gradient descent as well, a default step size of 10.0 seems rather large (typical values are 0.1, 0.5, or at least less than 1.0).

Can you give us some insight into how you determined the default step size of 10.0?
