Expected results

https://github.com/facebookresearch/Detectron/blob/master/detectron/ops/collect_and_distribute_fpn_rpn_proposals.py#L73

Since FPN_RPN collects RoIs across all the images (batches) within a GPU, it would be more reasonable for the collect size post_nms_topN to scale with the number of images per GPU, cfg.TRAIN.IMS_PER_BATCH. For example, something like the sketch below.
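A rough sketch of the idea (FPN_RPN_POST_NMS_TOP_N and the helper name are hypothetical; only cfg.TRAIN.IMS_PER_BATCH exists in the current config):

```python
from detectron.core.config import cfg

def fpn_rpn_collect_size(is_training):
    """Total post-NMS RoIs to collect per GPU, scaled by the number of images per GPU."""
    cfg_key = 'TRAIN' if is_training else 'TEST'
    # At test time Detectron always runs one image per GPU.
    ims_per_gpu = cfg.TRAIN.IMS_PER_BATCH if is_training else 1
    # FPN_RPN_POST_NMS_TOP_N is a hypothetical per-image budget (not in upstream config.py).
    return int(cfg[cfg_key].FPN_RPN_POST_NMS_TOP_N * ims_per_gpu)
```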
Note:
There is no cfg[cfg_key].FPN_RPN_POST_NMS_TOP_N (for cfg_key in {'TRAIN', 'TEST'}) defined in config.py yet.
To define them, I just follow the convention of cfg[cfg_key].RPN_POST_NMS_TOP_N: the maximum number of post-NMS RoIs per batch.
Take e2e_mask_rcnn_R-50-FPN_1x.yaml as an example: the config file would need to be changed as below to keep the original behavior.
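A hedged sketch of that change, assuming the hypothetical per-image FPN_RPN_POST_NMS_TOP_N key above and assuming this config's TEST.RPN_POST_NMS_TOP_N is 1000:

```yaml
# Hypothetical additions to e2e_mask_rcnn_R-50-FPN_1x.yaml; FPN_RPN_POST_NMS_TOP_N is not an
# upstream key, and the values below are chosen only to reproduce the current totals.
TRAIN:
  FPN_RPN_POST_NMS_TOP_N: 1000  # 1000 * IMS_PER_BATCH (2) = 2000 RoIs per GPU, as before
TEST:
  FPN_RPN_POST_NMS_TOP_N: 1000  # 1000 * 1 image per GPU at test time = 1000 RoIs, as before
```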
Note: TRAIN.IMS_PER_BATCH is 2 here, and at test time images-per-GPU is always 1.
Actual results
Currently, when training with a TRAIN.IMS_PER_BATCH other than 2, the post-NMS collect size of FPN_RPN is unchanged. I think that is probably not the desired behavior.
| Case | IMS_PER_BATCH | FPN_RPN post-NMS collect size |
| --- | --- | --- |
| Default | 2 | 2000 |
| Changed | 1 | 2000 |
| Changed | 4 | 2000 |
Again, "FPN_RPN post nms collect size" is a total rois size for all the batches in one gpu.
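For reference, the collect size is currently read from a single config key, so it never scales with the number of images per GPU; that is why the column above stays at 2000. Paraphrasing the linked line (not a verbatim quote):

```python
# Current behavior in collect_and_distribute_fpn_rpn_proposals.py (paraphrased):
# a single fixed total per GPU, independent of cfg.TRAIN.IMS_PER_BATCH.
post_nms_topN = cfg[cfg_key].RPN_POST_NMS_TOP_N  # 2000 during training with this config
```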
Detailed steps to reproduce
None.
System information
Irrelevant.
Operating system: ?
Compiler version: ?
CUDA version: ?
cuDNN version: ?
NVIDIA driver version: ?
GPU models (for all devices if they are not all the same): ?
PYTHONPATH environment variable: ?
python --version output: ?
Anything else that seems relevant: ?
I also ran into this issue recently. It is very misleading unless you dig into the code.
I don't understand why they mix proposals across images; there is no comment pointing it out. I think many people assume these configs are per image and increase the batch size without changing them.
@roytseng-tw, @drcege: I agree this is a bug; it should be per image, not per batch. I'll work on a fix for it in the near future (which requires assessing any impact on model zoo models).