This repository has been archived by the owner on Nov 21, 2023. It is now read-only.

Question about post_nms_topN size in collect_and_distribute_fpn_rpn_proposals.py #459

Closed
roytseng-tw opened this issue May 30, 2018 · 2 comments

Comments

@roytseng-tw
Contributor

Expected results

https://github.com/facebookresearch/Detectron/blob/master/detectron/ops/collect_and_distribute_fpn_rpn_proposals.py#L73

Since the FPN RPN collects RoIs across all images in the per-GPU batch, it would be more reasonable for the collect size post_nms_topN to scale with the number of images per GPU, cfg.TRAIN.IMS_PER_BATCH. For example, something like

post_nms_topN = cfg.TRAIN.IMS_PER_BATCH * cfg[cfg_key].FPN_RPN_POST_NMS_TOP_N

Note:
cfg[cfg_key].FPN_RPN_POST_NMS_TOP_N (for cfg_key in {'TRAIN', 'TEST'}) is not defined in config.py yet.
To define them, I just follow the convention of cfg[cfg_key].RPN_POST_NMS_TOP_N: the max number of post-NMS RoIs per batch.
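For concreteness, a minimal sketch of the proposed change (the helper name collect_post_nms_topN, the import path, and the handling of the test-time image count are my own assumptions for illustration, not the actual op code):

from detectron.core.config import cfg

def collect_post_nms_topN(is_training):
    """Illustrative sketch: total number of RoIs to keep when collecting
    proposals across FPN levels and across all images on one GPU."""
    cfg_key = 'TRAIN' if is_training else 'TEST'
    # Scale the per-image budget (the proposed new key) by the number of
    # images per GPU; at test time there is always one image per GPU.
    ims_per_gpu = cfg.TRAIN.IMS_PER_BATCH if is_training else 1
    return ims_per_gpu * cfg[cfg_key].FPN_RPN_POST_NMS_TOP_N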

Taking e2e_mask_rcnn_R-50-FPN_1x.yaml as an example, the config file would be changed as below to keep the original behavior.

(config above omitted)
TRAIN:
  SCALES: (800,)
  MAX_SIZE: 1333
  BATCH_SIZE_PER_IM: 512
  RPN_PRE_NMS_TOP_N: 2000  # Per FPN level
  FPN_RPN_POST_NMS_TOP_N: 1000  # Per image
TEST:
  SCALE: 800
  MAX_SIZE: 1333
  NMS: 0.5
  RPN_PRE_NMS_TOP_N: 1000  # Per FPN level
  RPN_POST_NMS_TOP_N: 1000
  FPN_RPN_POST_NMS_TOP_N: 1000  # Per image

Note: TRAIN.IMS_PER_BATCH is 2, and at test time images-per-GPU is always 1.
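
If it helps, the corresponding additions to config.py would presumably sit next to the existing RPN options; the values below are just placeholders chosen so that 2 * 1000 = 2000 at train time (IMS_PER_BATCH = 2) and 1 * 1000 = 1000 at test time, i.e. the current collect sizes are preserved.

# Sketch only; follows the existing __C.TRAIN / __C.TEST convention in config.py.
__C.TRAIN.FPN_RPN_POST_NMS_TOP_N = 1000  # Per image
__C.TEST.FPN_RPN_POST_NMS_TOP_N = 1000   # Per image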

Actual results

Currently, when training with a TRAIN.IMS_PER_BATCH other than 2, the post-NMS collect size of the FPN RPN is unchanged. I think that may not be the desired behavior.

         IMS_PER_BATCH    FPN_RPN post-NMS collect size
Default  2                2000
Changed  1                2000
Changed  4                2000

Again, "FPN_RPN post nms collect size" is a total rois size for all the batches in one gpu.

Detailed steps to reproduce

None.

System information

Irrelevant.

@drcege

drcege commented Jun 29, 2018

I also found this issue recently. It is very misleading unless you dig into the code.

I don't understand why proposals are mixed across images; there is no comment pointing this out. I think many people treat these configs as per-image and increase the batch size without changing them.

@rbgirshick
Contributor

@roytseng-tw, @drcege: I agree this is a bug, it should be per image, not per batch. I'll work on a fix for it in the near future (which requires assessing any impact on model zoo models).
