Hi @yifita, I recently came across your great work.
However, I am a little confused about the gradients of the regularization terms. In Section 4.2 of the paper (Alternating normal and point update), you state that the points and normals are updated using the gradients of the regularization terms. I found the code in
DSS/DSS/training/losses.py
Line 182 in d96260c
which seems related.
Let p_i denote the point in question and p_k denote one of the points in its neighborhood. Since p_i and p_k are known values, L_r and L_k evaluate to constants. How is the gradient with respect to p_k then computed?
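For concreteness, here is a minimal sketch of the kind of setup I am asking about. The function and variable names below are made up for illustration and are not taken from losses.py; I only assume a repulsion-style term that depends on the distances between p_i and its neighbors p_k, and that the neighbor positions are gathered from the same tensor that requires gradients, so autograd also produces a gradient for each p_k:

```python
import torch

# Hypothetical illustration (not the actual DSS loss): a repulsion-style
# regularization that penalizes points lying too close to their neighbors.
def repulsion_loss(points, knn_idx, sigma=0.05):
    """points: (N, 3) tensor with requires_grad=True
       knn_idx: (N, K) long tensor of neighbor indices"""
    neighbors = points[knn_idx]              # (N, K, 3), gathered from `points`
    diff = points.unsqueeze(1) - neighbors   # p_i - p_k
    dist2 = (diff ** 2).sum(-1)              # squared distances
    # Gaussian-weighted penalty: large when neighbors are very close
    return torch.exp(-dist2 / (sigma ** 2)).sum()

points = torch.randn(100, 3, requires_grad=True)
knn_idx = torch.randint(0, 100, (100, 8))    # placeholder neighbor indices
loss = repulsion_loss(points, knn_idx)
loss.backward()
# points.grad now holds dL/dp for every point: each p_k receives gradient
# both from its own row (acting as p_i) and from every row where it
# appears as a neighbor.
print(points.grad.shape)   # torch.Size([100, 3])
```

Is this roughly how the point update in the paper should be understood, or is the gradient with respect to p_k derived differently?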
Thanks,