Hello,

I have run this work both with Docker and without Docker (to debug and to understand it).

My question is as follows:

From what I understand, the quantitative evaluation treats the generated correspondences as the ground-truth labels. In the YouTube video of the talk about this paper, it was mentioned that some manual labeling was done for the cross-instance and cross-configuration object categories.

For the single-object-within-scene category, where correspondences are generated at runtime, do you have any unit test that verifies the accuracy of the generated correspondences?
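For concreteness, here is a rough sketch of the kind of consistency check I have in mind. This is my own illustration, not code from this repository: the function names, and the assumption that depth images (in meters), camera intrinsics `K`, and world-frame camera poses `T_world_cam` are available for both frames, are all hypothetical.

```python
# Hypothetical geometric sanity check for a runtime-generated correspondence:
# back-project the pixel from frame A using its depth, reproject it into
# frame B, and confirm it lands on (and at the depth of) the matched pixel.
import numpy as np

def pixel_to_world(u, v, depth, K, T_world_cam):
    """Back-project pixel (u, v) with metric depth into world coordinates."""
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    p_cam = np.array([x, y, depth, 1.0])
    return (T_world_cam @ p_cam)[:3]

def world_to_pixel(p_world, K, T_world_cam):
    """Project a world point into pixel coordinates of the given camera."""
    T_cam_world = np.linalg.inv(T_world_cam)
    p_cam = T_cam_world @ np.append(p_world, 1.0)
    u = K[0, 0] * p_cam[0] / p_cam[2] + K[0, 2]
    v = K[1, 1] * p_cam[1] / p_cam[2] + K[1, 2]
    return u, v, p_cam[2]

def check_correspondence(uv_a, uv_b, depth_a, depth_b, K, T_a, T_b,
                         pixel_tol=2.0, depth_tol=0.01):
    """True if pixel uv_a in frame A and pixel uv_b in frame B agree
    geometrically within the given pixel and depth tolerances."""
    p_world = pixel_to_world(uv_a[0], uv_a[1], depth_a, K, T_a)
    u_b, v_b, z_b = world_to_pixel(p_world, K, T_b)
    pixel_ok = np.hypot(u_b - uv_b[0], v_b - uv_b[1]) <= pixel_tol
    depth_ok = abs(z_b - depth_b) <= depth_tol  # rejects occluded matches
    return pixel_ok and depth_ok
```

Something like this could be run over a batch of sampled correspondences to report what fraction pass, which is roughly the kind of "unit test for correspondence accuracy" I am asking about.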