🚀 Feature
The postprocessing step in RetinaNet is slow: as of today, end-to-end inference with RetinaNet is almost twice as slow as Faster R-CNN.
In particular,

vision/torchvision/models/detection/retinanet.py
Lines 442 to 471 in 5bb81c8

does a for loop over the number of classes. This for loop can be parallelized by batching operations together over all classes, which should greatly improve inference speed.
For reference, Detectron2 has already sped up RetinaNet inference several times, with the latest optimization in facebookresearch/detectron2@8999946. Detectron2 also batches inference over classes, and only keeps a for loop over the number of feature maps, which is much smaller than the number of COCO classes.
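
As a minimal sketch of the idea, assuming decoded boxes of shape [N, 4] and per-anchor sigmoid scores of shape [N, num_classes], the per-class loop could be replaced with a single `torchvision.ops.batched_nms` call. The function name and default thresholds below are illustrative, not torchvision's actual API:

```python
import torch
from torchvision.ops import batched_nms

def postprocess_batched(boxes, scores, score_thresh=0.05, topk=1000, nms_thresh=0.5):
    # boxes:  [N, 4] boxes already decoded from the anchors
    # scores: [N, num_classes] classification scores after sigmoid
    num_classes = scores.shape[1]

    # Flatten so every (anchor, class) pair is one candidate detection.
    flat_scores = scores.flatten()  # [N * num_classes]

    # Drop low-scoring candidates, then keep at most `topk` of the rest.
    keep = flat_scores > score_thresh
    idxs = torch.nonzero(keep).squeeze(1)
    flat_scores = flat_scores[keep]
    num_topk = min(topk, flat_scores.numel())
    flat_scores, order = flat_scores.topk(num_topk)
    idxs = idxs[order]

    # Recover the anchor index and class label from the flattened index.
    anchor_idxs = torch.div(idxs, num_classes, rounding_mode="floor")
    labels = idxs % num_classes
    cand_boxes = boxes[anchor_idxs]

    # batched_nms offsets boxes per label so one NMS call behaves like
    # independent per-class NMS, replacing the Python-level for loop.
    keep = batched_nms(cand_boxes, flat_scores, labels, nms_thresh)
    return cand_boxes[keep], flat_scores[keep], labels[keep]
```

Since `batched_nms` is implemented by offsetting boxes of different labels so they can never overlap, a single NMS kernel launch covers all classes at once instead of one launch per class.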