Hi @boukhayma,
I am trying to reproduce your results on the STB dataset. The models provided on the GitHub page perform very poorly when used to infer even 2D coordinates on the STB test set. Do you have a separately trained model for STB that you could provide?
At present, I am extracting a cropped hand image (size = 150% of the bounding box) from the STB dataset, and applying exactly the same image transformations as provided in the GitHub repository here.
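For reference, my cropping logic looks roughly like the sketch below. The 150% scale is the only value taken from my pipeline; the bbox format and the use of PIL are just how I happen to do it, in case the discrepancy is there:

```python
import numpy as np
from PIL import Image

def crop_hand(image, bbox, scale=1.5):
    """Crop a square hand patch at 150% of the tight bounding box.

    bbox is (x_min, y_min, x_max, y_max) in pixel coordinates
    (my convention; I compute it from the ground-truth 2D keypoints).
    """
    x_min, y_min, x_max, y_max = bbox
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    # Side of the square crop: 150% of the larger bbox dimension.
    side = scale * max(x_max - x_min, y_max - y_min)
    left = int(round(cx - side / 2.0))
    top = int(round(cy - side / 2.0))
    right = int(round(cx + side / 2.0))
    bottom = int(round(cy + side / 2.0))
    # PIL zero-pads regions that fall outside the image.
    return image.crop((left, top, right, bottom))
```

After this crop I resize and normalize exactly as in the repository's transforms.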
Also, can you provide the exact ids of the vertices you used from the MANO mesh to interpolate the palm coordinate for data consistency (mentioned in Section 8 of the paper)?
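Currently I approximate the palm coordinate as a mean of a few MANO mesh vertices, roughly as below. The vertex ids in the sketch are placeholders, which is exactly what I am asking about:

```python
import numpy as np

# PLACEHOLDER ids: the real ids used in the paper are what this issue
# asks for. These are NOT the correct values.
PALM_VERTEX_IDS = [95, 22]

def interpolate_palm(vertices, ids=PALM_VERTEX_IDS, weights=None):
    """Approximate the palm keypoint as a (weighted) mean of selected
    MANO mesh vertices.

    vertices: (778, 3) array of MANO mesh vertex positions.
    weights: optional per-vertex weights; defaults to a uniform mean.
    """
    ids = np.asarray(ids)
    if weights is None:
        weights = np.full(len(ids), 1.0 / len(ids))
    return np.average(vertices[ids], axis=0, weights=weights)
```

If you interpolated with non-uniform weights rather than a plain average, those weights would also be useful to know.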