Why does `build_prototypes.ipynb` use `model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14')`, while inference in `demo.py` uses `model_path="weights/trained/open-vocabulary/lvis/vitl_0069999.pth"`? Shouldn't the models be the same?
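For context, here is my rough understanding of the two load paths (paraphrased, not the repo's exact code; the `torch.load` call and `map_location` are my assumptions about what `demo.py` does with `model_path`):

```python
import torch

# Prototype building (build_prototypes.ipynb): the frozen DINOv2 ViT-L/14
# backbone is pulled straight from torch.hub.
backbone = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14')
backbone.eval()

# Inference (demo.py): a full detector checkpoint is loaded from disk instead.
# Assumption: this checkpoint bundles the backbone plus trained detection heads.
checkpoint = torch.load(
    "weights/trained/open-vocabulary/lvis/vitl_0069999.pth",
    map_location="cpu",
)
```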
Why do we have different weights files for COCO and LVIS? As I understand it, the ViT backbone stays the same (DINOv2) even when the dataset changes, since there is no fine-tuning.
Finally, for a custom dataset: if there is no domain gap, the steps would be to (1) create prototypes and (2) run the demo (step 1 sketched below); if there is a domain gap, I guess we would have to fine-tune the RPN and leave the ViT part as it is. Would this be the right approach?
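For concreteness, step 1 as I picture it would be something like the sketch below (a hypothetical helper, not taken from the notebook; `crops_by_class`, the input resolution, and the normalization constants are all my assumptions):

```python
import torch
import torchvision.transforms as T
from PIL import Image

def build_prototypes(crops_by_class):
    """Average DINOv2 CLS embeddings per class to get one prototype per class.

    crops_by_class: dict mapping class name -> list of instance-crop image
    paths (assumed format for this sketch).
    """
    model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14').eval()
    preprocess = T.Compose([
        T.Resize((224, 224)),  # multiple of the ViT's 14px patch size
        T.ToTensor(),
        T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ])
    prototypes = {}
    with torch.no_grad():
        for cls, paths in crops_by_class.items():
            feats = [
                model(preprocess(Image.open(p).convert("RGB")).unsqueeze(0))
                for p in paths
            ]
            proto = torch.cat(feats).mean(dim=0)
            prototypes[cls] = proto / proto.norm()  # L2-normalize the prototype
    return prototypes
```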