
about some details #3

Open
foralliance opened this issue Mar 20, 2019 · 4 comments

Comments

@foralliance

@KimSoybean Hi,
I have a few questions:

  1. The analysis of BN's effect is based on SSD + VGG16. Can we conclude that any detector whose backbone network contains BN layers can be trained from scratch?

  2. Section 4.1 says to remove the L2 Normalization. Is this because the BN layers can take the place of the L2 Normalization?
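For context, the L2 Normalization in question is the ParseNet-style layer that the original SSD applies to its conv4_3 features. A minimal NumPy sketch of that operation (illustrative only; the scale is normally a learnable per-channel parameter initialized to 20):

```python
import numpy as np

def l2_normalize(features, scale=20.0, eps=1e-10):
    """ParseNet-style L2 normalization, as SSD applies to conv4_3.

    features: array of shape (channels, height, width).
    The channel vector at each spatial position is rescaled to unit
    L2 norm, then multiplied by a per-channel scale (a constant here,
    learnable in the real layer).
    """
    norm = np.sqrt((features ** 2).sum(axis=0, keepdims=True)) + eps
    return scale * features / norm
```

After this layer, the channel vector at every spatial position has norm `scale`, which keeps conv4_3's large activations comparable to the deeper feature maps.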

@KimSoybean
Owner

@foralliance Hi! I think we should use English, since your questions may also help people in other countries.

  1. No. When you train the model with a large input size (e.g., 800x1300), the batch size drops to 1-2 due to limited GPU memory, which constrains the effect of BN. In that case, replace BN with GN; GN is insensitive to batch size.

  2. This question is quite complex. I can only say that the two are not related.
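The batch-size point in answer 1 can be made concrete with a small NumPy sketch (illustrative, not the repository's code): BN computes its statistics across the batch dimension, so its output for a sample changes with batch size, while GN normalizes within each sample, so batch size 1 behaves the same as batch size 32.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize over (batch, height, width) per channel.
    Statistics depend on which other samples are in the batch."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def group_norm(x, num_groups=2, eps=1e-5):
    """Normalize each sample's channels within groups.
    Statistics are per-sample, hence independent of batch size."""
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)
```

For a given sample, `group_norm(x)[0]` equals `group_norm(x[:1])[0]`, whereas `batch_norm` gives different results for the two batch sizes; that is why GN is the safer choice when memory forces a batch size of 1-2.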

@foralliance
Author

@KimSoybean
many many thanks!

@dby2017

dby2017 commented Mar 27, 2019

Have you tried using Root-ResNet-18/34 on faster-rcnn?

@KimSoybean
Owner

@dby2017 Hello, I haven't tried Root-ResNet on Faster R-CNN. It may not be very effective there, because Faster R-CNN's large input resolution already makes small objects larger than before.
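For readers unfamiliar with Root-ResNet: as I understand the idea, the standard ResNet stem (a 7x7 stride-2 convolution) is replaced with a "root block" of stacked 3x3 stride-1 convolutions, so early layers keep full spatial resolution for small objects. A hedged PyTorch sketch, with illustrative channel counts (not the repository's exact code):

```python
import torch
import torch.nn as nn

def root_block(in_ch=3, out_ch=64):
    """Sketch of a Root-ResNet-style stem: stacked 3x3 stride-1 convs
    in place of ResNet's 7x7 stride-2 conv, preserving resolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```

Because nothing in this stem downsamples, a 32x32 input stays 32x32, which is the property that helps small objects at SSD-scale inputs and matters less at Faster R-CNN's larger resolutions.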
