Hi Authors,

Thanks for your great work!

During training, I want to use other geometric features (e.g., normals) in addition to the coordinates (x, y, z). That is, the input shape becomes (B, N, 6) instead of (B, N, 3). After this extension, I have some questions about the calculation of the Jacobian matrix. I still compute the gradient only with respect to x, y, and z: A1 = self.mlp1[0].weight[:, :3, :]. Does this make sense?

Thanks!
Hi @StrivedTye, you can view the ReLU-MLP as ReLU(Ax + b) (see Eq. 8), so you are computing the partial derivative of the network with respect to its input. If your input has size 6, you will get a gradient of size 6, and each dimension corresponds to one dimension of your input; slicing out the first three columns therefore gives you the gradient with respect to (x, y, z) only. So your understanding should be correct.
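To make this concrete, here is a minimal sketch of the idea for a single layer. This is not the repo's actual code: it assumes a plain torch.nn.Linear layer with illustrative names and shapes (the question's self.mlp1[0].weight[:, :3, :] suggests a batched weight of shape (B, in_dim, out_dim), while nn.Linear stores (out_dim, in_dim)), and it checks the hand-computed Jacobian against autograd.

```python
import torch
import torch.nn as nn

# Hypothetical single ReLU-MLP layer f(x) = ReLU(Ax + b)
# taking a 6-dim input (x, y, z, nx, ny, nz).
in_dim, out_dim = 6, 64
layer = nn.Linear(in_dim, out_dim)   # weight A has shape (out_dim, in_dim)

x = torch.randn(1, in_dim)           # one input point with normals appended
pre = layer(x)                       # Ax + b, shape (1, out_dim)

# Jacobian of ReLU(Ax + b) w.r.t. the full 6-dim input:
# df/dx = diag(1[Ax + b > 0]) @ A, shape (out_dim, in_dim)
mask = (pre > 0).float().squeeze(0)          # ReLU derivative, (out_dim,)
J_full = mask.unsqueeze(1) * layer.weight    # (out_dim, 6)

# Keep only the columns for (x, y, z), analogous to the question's
# A1 = self.mlp1[0].weight[:, :3, :]
J_xyz = J_full[:, :3]                        # (out_dim, 3)

# Sanity check against autograd
J_auto = torch.autograd.functional.jacobian(
    lambda v: torch.relu(layer(v)), x).squeeze()  # (out_dim, 6)
assert torch.allclose(J_full, J_auto)
```

Under these assumptions, the extra normal columns simply add three more columns to the Jacobian; dropping them via the slice restores the size-3 gradient over the spatial coordinates.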