
Index out of range for pretrained model #4

Open
Prudhvinik1 opened this issue Jun 10, 2019 · 3 comments
@Prudhvinik1
loading annotations into memory...
Done (t=0.02s)
creating index...
index created!
INFO json_dataset_rel.py: 395: Loading cached gt_roidb from /home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/data/cache/vrd_val_rel_gt_roidb.pkl
INFO subprocess_rel.py: 88: rel_detection range command 0: python3 /home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/./tools/test_net_rel.py --range 0 250 --cfg Outputs/vrd_VGG16_COCO_pretrained/rel_detection_range_config.yaml --set TEST.DATASETS '("vrd_val",)' --do_val --output_dir Outputs/vrd_VGG16_COCO_pretrained --load_ckpt trained_models/vrd_VGG16_COCO_pretrained/model_step7559.pth
INFO subprocess_rel.py: 88: rel_detection range command 1: python3 /home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/./tools/test_net_rel.py --range 250 500 --cfg Outputs/vrd_VGG16_COCO_pretrained/rel_detection_range_config.yaml --set TEST.DATASETS '("vrd_val",)' --do_val --output_dir Outputs/vrd_VGG16_COCO_pretrained --load_ckpt trained_models/vrd_VGG16_COCO_pretrained/model_step7559.pth
INFO subprocess_rel.py: 88: rel_detection range command 2: python3 /home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/./tools/test_net_rel.py --range 500 750 --cfg Outputs/vrd_VGG16_COCO_pretrained/rel_detection_range_config.yaml --set TEST.DATASETS '("vrd_val",)' --do_val --output_dir Outputs/vrd_VGG16_COCO_pretrained --load_ckpt trained_models/vrd_VGG16_COCO_pretrained/model_step7559.pth
INFO subprocess_rel.py: 88: rel_detection range command 3: python3 /home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/./tools/test_net_rel.py --range 750 1000 --cfg Outputs/vrd_VGG16_COCO_pretrained/rel_detection_range_config.yaml --set TEST.DATASETS '("vrd_val",)' --do_val --output_dir Outputs/vrd_VGG16_COCO_pretrained --load_ckpt trained_models/vrd_VGG16_COCO_pretrained/model_step7559.pth
Traceback (most recent call last):
  File "./tools/test_net_rel.py", line 175, in
    check_expected_results=True)
  File "/home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/lib/core/test_engine_rel.py", line 121, in run_inference
    all_results = result_getter()
  File "/home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/lib/core/test_engine_rel.py", line 101, in result_getter
    multi_gpu=multi_gpu_testing
  File "/home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/lib/core/test_engine_rel.py", line 140, in test_net_on_dataset
    args, dataset_name, proposal_file, num_images, output_dir
  File "/home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/lib/core/test_engine_rel.py", line 187, in multi_gpu_test_net_on_dataset
    args.load_ckpt, args.load_detectron, opts
  File "/home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/lib/utils_rel/subprocess_rel.py", line 69, in process_in_parallel
    start = subinds[i][0]
IndexError: list index out of range

Could you help me fix this?

@jz462 (Contributor) commented Jul 24, 2019

Hi @Prudhvinik1,

I think this is probably an issue with the environment you are using. I have seen this before when someone else at a company tried to run the same code on their system, and the issue persisted. Unfortunately I don't know how to tackle it, but if you have already found a fix, you are more than welcome to share it here. Thanks!

Ji

@azadef commented Nov 13, 2019

On my machine gpu_inds was [0, 1, 2, 3, 4, 5, 6, 7] even though I have only 2 GPUs. Setting gpu_inds = [0, 1] manually fixed the issue for me; a rough sketch of the idea is below.
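Not the repository's exact code, just a minimal sketch of this workaround, assuming the per-GPU image ranges in process_in_parallel are derived from gpu_inds with something like numpy's array_split (the names gpu_inds and subinds mirror the traceback; the surrounding logic is illustrative):

import numpy as np

num_images = 1000   # size of the test split being distributed
gpu_inds = [0, 1]   # hard-code the two physical GPUs instead of the config's full range

# One contiguous image range per GPU. With len(gpu_inds) <= num_images every
# chunk is non-empty, so subinds[i][0] can no longer go out of range.
subinds = np.array_split(np.arange(num_images), len(gpu_inds))
for i, gpu_id in enumerate(gpu_inds):
    start, end = int(subinds[i][0]), int(subinds[i][-1]) + 1
    print('GPU %d: --range %d %d' % (gpu_id, start, end))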

@shabnamsadegh

It happened for me too when the number of images I want to run inference on is less than the number of GPUs. The code tries to divide the work across all GPUs, and that is when the problem arises; see the sketch below.
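An illustrative reproduction of that failure mode; split_ranges here is a hypothetical stand-in for the range splitting in subprocess_rel.py, whose exact implementation may differ:

# Mimics distributing num_images image indices across num_gpus workers.
def split_ranges(num_images, num_gpus):
    per_gpu = num_images // num_gpus   # integer division -> 0 when images < GPUs
    return [list(range(i * per_gpu, (i + 1) * per_gpu)) for i in range(num_gpus)]

subinds = split_ranges(num_images=3, num_gpus=8)
print(subinds)           # every sublist is empty because 3 // 8 == 0

start = subinds[0][0]    # IndexError: list index out of range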
