```
Traceback (most recent call last):
  File "/content/Multi-Task-Transformer/TaskPrompter/inference.py", line 185, in <module>
    infer_one_image(args.image_path)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/Multi-Task-Transformer/TaskPrompter/inference.py", line 141, in infer_one_image
    model = initialize_model(p, checkpoint_path)
  File "/content/Multi-Task-Transformer/TaskPrompter/inference.py", line 60, in initialize_model
    model = get_model(p)
  File "/content/Multi-Task-Transformer/TaskPrompter/utils/common_config.py", line 79, in get_model
    backbone, backbone_channels = get_backbone(p)
  File "/content/Multi-Task-Transformer/TaskPrompter/utils/common_config.py", line 22, in get_backbone
    backbone = taskprompter_vit_large_patch16_384(p=p, pretrained=True, drop_path_rate=0.15, img_size=p.TRAIN.SCALE)
  File "/content/Multi-Task-Transformer/TaskPrompter/models/transformers/taskprompter.py", line 676, in taskprompter_vit_large_patch16_384
    model = _create_task_prompter('vit_large_patch16_384', pretrained=pretrained, **model_kwargs)
  File "/content/Multi-Task-Transformer/TaskPrompter/models/transformers/taskprompter.py", line 661, in _create_task_prompter
    model = build_model_with_cfg(
  File "/usr/local/lib/python3.10/dist-packages/timm/models/_builder.py", line 385, in build_model_with_cfg
    model = model_cls(**kwargs)
TypeError: TaskPrompter.__init__() got an unexpected keyword argument 'default_cfg'
```
```
Traceback (most recent call last):
  File "/content/Multi-Task-Transformer/TaskPrompter/inference.py", line 185, in <module>
    infer_one_image(args.image_path)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/Multi-Task-Transformer/TaskPrompter/inference.py", line 121, in infer_one_image
    p = create_config(args.config_path, {'run_mode': 'infer'})
  File "/content/Multi-Task-Transformer/TaskPrompter/utils/config.py", line 94, in create_config
    with open(exp_file, 'r') as stream:
FileNotFoundError: [Errno 2] No such file or directory: './configs/pascal/pascal_vitLp16.yml'
```
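The second error is unrelated to timm: create_config() opens the config path exactly as given, so the relative path ./configs/pascal/pascal_vitLp16.yml only resolves when the script is launched with the TaskPrompter folder as the working directory. A minimal sketch of the check, assuming the error comes from the working directory rather than a genuinely missing config file, and using the Colab clone location shown in the traceback:

```python
import os

# FileNotFoundError above: create_config() resolves the config path relative to
# the current working directory. Sketch only, assuming the repo was cloned to the
# Colab path seen in the traceback and that the config file exists in the repo.
os.chdir('/content/Multi-Task-Transformer/TaskPrompter')
assert os.path.isfile('./configs/pascal/pascal_vitLp16.yml'), 'config file not found'
```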
Platform
Google Colab with T4 runtime
Hi, I'm not the author, but I encountered a similar error:
File "/workspace/container_test_folder/Multi-Task-Transformer/InvPT/models/transformers/vit.py", line 546, in _create_vision_transformer
model = build_model_with_cfg(
File "/opt/conda/lib/python3.10/site-packages/timm/models/helpers.py", line 537, in build_model_with_cfg
model = model_cls(**kwargs) if model_cfg is None else model_cls(cfg=model_cfg, **kwargs)
TypeError: VisionTransformer.__init__() got an unexpected keyword argument 'default_cfg'
The error can be resolved by simply renaming the default_cfg keyword argument to pretrained_cfg at line 548 of "InvPT/models/transformers/vit.py". I hope this solution helps you :)
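For reference, a minimal sketch of the change, assuming the call follows the usual timm-style builder pattern (the exact surrounding arguments may differ in your checkout). Judging from the first traceback, the analogous rename would go in the build_model_with_cfg call inside _create_task_prompter in TaskPrompter/models/transformers/taskprompter.py:

```python
# InvPT/models/transformers/vit.py, inside _create_vision_transformer
# (sketch only: argument names and values are assumed and may differ in your copy)
model = build_model_with_cfg(
    VisionTransformer, variant, pretrained,
    # default_cfg=default_cfgs[variant],   # old timm keyword; newer timm forwards it
    #                                      # into VisionTransformer.__init__ -> TypeError
    pretrained_cfg=default_cfgs[variant],  # keyword accepted by newer timm releases
    **kwargs,
)
```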
Can you help me with another related issue? I am trying to detect 3D bounding boxes over objects; how can I do that?
After detecting the 3D bounding boxes, I want to estimate the monocular depth of the detected objects.
Steps done:
1. Used the provided .pth.tar checkpoint files: Error
2. Tried the other solution from closed issue #10: Error
Platform
Google Colab with T4 runtime