Question about supervision for training on ClothesSeq data. #8

Open
GostInShell opened this issue Mar 9, 2022 · 0 comments
Hi, thanks for releasing your code!

If I understand the following code correctly, in the first stage the skinning weights are predicted for both the SMPL mesh and the clothed human mesh.
My question is how the ground-truth weights for the clothed human (`skin_bp`) are computed.

```python
weight_pred = self.model_wgt(pts, body_enc_feat, pose_in)
weight_smpl = self.model_wgt(smpl_vert, body_enc_feat_smpl, pose_in)

w1 = self.loss_l1(weight_smpl, gt_skin)
w2 = self.loss_l1(weight_pred, skin_bp)
```
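For context on what `skin_bp` might contain: a common way to get pseudo-ground-truth skinning weights for points on a clothed scan is to transfer the weights of the nearest registered SMPL vertex. This is only an assumption about how `skin_bp` could be computed (the function and variable names below are hypothetical), sketched here with NumPy:

```python
import numpy as np

def transfer_skinning_weights(scan_pts, smpl_verts, smpl_weights):
    """Assign each scan point the LBS weights of its nearest SMPL vertex.

    scan_pts:     (P, 3) points sampled on the clothed scan
    smpl_verts:   (V, 3) registered SMPL vertices
    smpl_weights: (V, J) SMPL skinning weights (rows sum to 1)
    """
    # Pairwise squared distances (P, V); fine for a toy example,
    # a KD-tree (e.g. scipy.spatial.cKDTree) would be used at scale.
    d2 = ((scan_pts[:, None, :] - smpl_verts[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)   # index of the nearest SMPL vertex per point
    return smpl_weights[nn]  # (P, J) transferred weights

# Toy example: 2 "SMPL vertices" with one-hot weights, 3 scan points.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
wgts = np.array([[1.0, 0.0], [0.0, 1.0]])
pts = np.array([[0.1, 0.0, 0.0], [0.9, 0.0, 0.0], [0.45, 0.0, 0.0]])
print(transfer_skinning_weights(pts, verts, wgts))
# → [[1. 0.]
#    [0. 1.]
#    [1. 0.]]
```

Whether the repo actually uses nearest-neighbor transfer (or, e.g., barycentric interpolation on the fitted SMPL surface) is exactly what I am asking.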

Moreover, the clothed human in the canonical pose (`can_pts_gt`) is also used for supervision here.

```python
diff_can_bp = can_pt_bp - can_pts_gt
diff_can_bp = torch.sqrt(torch.sum(diff_can_bp*diff_can_bp, dim=2)).mean()
```
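If I read it right, this term is the mean per-point Euclidean distance between the back-projected canonical points and the ground-truth canonical scan. A minimal NumPy sketch of the same computation, assuming both tensors have shape `(B, N, 3)`:

```python
import numpy as np

# Hypothetical toy tensors standing in for can_pt_bp / can_pts_gt, shape (B, N, 3).
can_pt_bp = np.zeros((1, 4, 3))
can_pts_gt = np.ones((1, 4, 3))

diff = can_pt_bp - can_pts_gt
# Per-point L2 norm over the xyz axis, then mean over batch and points.
loss = np.sqrt((diff * diff).sum(axis=2)).mean()
print(loss)  # → 1.7320508075688772 (sqrt(3): every point is offset by (1,1,1))
```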

I understand how these variables can be obtained for a SMPL mesh, but how are they obtained for a clothed human mesh?
