I have a couple of questions regarding the model's performance and application. First, how well does the model perform in hand-object interaction scenarios? Additionally, could you provide some guidance on setting up the pipeline for inference on custom datasets?
I appreciate any insights you can share.
First, A2J-Transformer can be applied to hand-object interaction datasets such as HO-3D. To write the dataloader, you can follow Keypoint Transformer (CVPR 2022).
Second, for inference on a custom dataset, the dataloader only needs to provide the model with "input_img"; A2J-Transformer then outputs the 2.5D joint coordinates for the image, from which the 2D and the root-relative 3D coordinates can be visualized.
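As a rough illustration of that last step, here is a minimal sketch of turning 2.5D predictions (pixel u, v plus depth relative to the root joint) into root-relative 3D coordinates via pinhole back-projection. The function name, the argument names (`root_depth`, `fx`, `fy`, `cx`, `cy`), and the list-of-tuples layout are all illustrative assumptions, not the repo's actual API; you would need an estimate of the root joint's absolute depth and the camera intrinsics for your own data.

```python
def backproject_25d(joints_25d, root_depth, fx, fy, cx, cy, root_idx=0):
    """Convert 2.5D joints [(u, v, d_rel), ...] to root-relative 3D.

    Hypothetical helper (not from the A2J-Transformer repo): u, v are pixel
    coordinates, d_rel is depth relative to the root joint, root_depth is the
    root joint's absolute depth, and fx, fy, cx, cy are camera intrinsics.
    """
    abs_3d = []
    for u, v, d_rel in joints_25d:
        z = root_depth + d_rel          # recover absolute depth
        x = (u - cx) * z / fx           # pinhole back-projection
        y = (v - cy) * z / fy
        abs_3d.append((x, y, z))
    # Subtract the root joint's absolute 3D position -> root-relative 3D.
    rx, ry, rz = abs_3d[root_idx]
    return [(x - rx, y - ry, z - rz) for (x, y, z) in abs_3d]
```

For example, with the root joint at the principal point and a second joint 50 px to its right at the same relative depth, the second joint comes out 50 depth-units along x when `z == fx`.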