
Performance and backbone network #17

Open
John1231983 opened this issue Jun 27, 2020 · 4 comments


John1231983 commented Jun 27, 2020

Thanks for sharing this great work. In your code, you provide several backbone networks:

support: ['ResNet_50', 'ResNet_101', 'ResNet_152', 'IR_50', 'IR_101', 'IR_152', 'IR_SE_50', 'IR_SE_101', 'IR_SE_152']

In its Model Zoo, InsightFace provides LResNet100E-IR:

Method LFW(%) CFP-FP(%) AgeDB-30(%) MegaFace(%)
Ours 99.77 98.27 98.28 98.47

This is my result, trained from scratch with IR-101 using your settings:

Evaluation: LFW Acc: 0.9977, CFP_FP Acc: 0.9830, AgeDB Acc: 0.9818, CPLFW Acc: 0.9302, CALFW Acc: 0.9608

And this is your reported result:

Data LFW CFP-FP CPLFW AGEDB CALFW IJBB (TPR@FAR=1e-4) IJBC (TPR@FAR=1e-4)
Result 99.80 98.36 93.13 98.37 96.05 94.86 96.15

I have some questions after reading your code:

  1. Do you use data augmentation? I found that only RandomHorizontalFlip is applied, while the InsightFace team uses augmentations such as flip, ColorJitterAug, and compress_aug: https://github.com/deepinsight/insightface/blob/3866cd77a6896c934b51ed39e9651b791d78bb57/recognition/image_iter.py#L207

  2. I am using 4 GPUs with a batch size of 700 per GPU. My performance is lower than your report. Do you think the number of GPUs is the reason (you used 8 GPUs)?

  3. Is your IR_101 the same as LResNet100E-IR in terms of FLOPs and parameters? I found that you save the backbone and head separately, while InsightFace saves them as one model. Is there any difference?

  4. Have you measured the inference speed of IR_101? It feels much slower than the MXNet version.
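On question 4, speed comparisons are only meaningful with warm-up iterations and averaged timings. A minimal, framework-agnostic sketch (`measure_latency` is a hypothetical helper, not part of this repo; the workload below is a stand-in for a real forward pass):

```python
import time

def measure_latency(fn, warmup=5, runs=50):
    """Average wall-clock latency of fn() in seconds.

    Warm-up iterations are discarded so one-time costs (allocation,
    JIT/kernel compilation, cache warm-up) do not skew the average.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# with a real model this would be measure_latency(lambda: model(batch));
# here a CPU-bound stand-in workload:
latency = measure_latency(lambda: sum(i * i for i in range(10_000)))
print(f"{latency * 1e3:.3f} ms per call")
```

Note that for a GPU model you would also need to synchronize before stopping the timer (e.g. `torch.cuda.synchronize()`), since asynchronous kernel launches otherwise make the measured time look far too fast.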

@HeshamAMH

Please @John1231983, I tested the model provided by the author on LFW, but the result is 0.661, not 99+. Could you share a link to the LFW data and its pairs file? I think the problem on my side is the LFW data itself.

Thanks in advance

@quangtn266

> please John1231983, I test the model provided by the author on LFW but the result is 0.661 not 99.+. So, please I need a link to LFW and its pair file. I think the problem for me is LFW itself.
>
> Thanks in advance

Hi, did you solve your problem?

I also have the same issue and would like to know the solution.

@marcohuber

@quangtn266
I downloaded the (preprocessed) LFW data, including the pairs file, from here and achieved 99.783 on LFW using the provided CurricularFace model. That is still slightly worse than reported, but I guess it comes down to minor differences in preprocessing. Maybe something is wrong with your preprocessing or pairs file.
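For reference, LFW accuracy is just the fraction of the 6,000 face pairs classified correctly at the best similarity threshold (the official protocol tunes the threshold over 10 folds). A minimal single-fold sketch, assuming cosine similarities between pair embeddings have already been computed (`lfw_accuracy` is a hypothetical helper, not from this repo):

```python
def lfw_accuracy(scores, labels):
    """Best-threshold pair accuracy.

    scores: cosine similarity per face pair (higher = more similar)
    labels: True if the pair shows the same identity
    Returns (best_accuracy, best_threshold). The official LFW protocol
    picks the threshold on 9 folds and evaluates on the 10th; this
    single-fold version is only a sanity check.
    """
    best_acc, best_thr = 0.0, 0.0
    for thr in sorted(set(scores)):
        correct = sum((s >= thr) == l for s, l in zip(scores, labels))
        acc = correct / len(scores)
        if acc > best_acc:
            best_acc, best_thr = acc, thr
    return best_acc, best_thr

# toy check: well-separated scores are perfectly classifiable
acc, thr = lfw_accuracy([0.9, 0.8, 0.2, 0.1], [True, True, False, False])
print(acc)  # 1.0
```

An accuracy near chance (like the 0.661 reported above) usually means the embeddings carry no identity signal at all, i.e. the backbone ran with random weights or the images were not aligned the way the model expects.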

@michellerybak

I had this problem when I didn't specify the checkpoint path in the config file under BACKBONE_RESUME_ROOT.
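For context, a hedged sketch of what that setting looks like: `BACKBONE_RESUME_ROOT` is the key named above; the path is a placeholder, and the `HEAD_RESUME_ROOT` key is my assumption about the companion setting, not confirmed from this thread.

```python
# sketch of the relevant config entries (path is a placeholder, not the
# real checkpoint location)
BACKBONE_RESUME_ROOT = "./checkpoints/CurricularFace_Backbone.pth"  # pretrained backbone weights
HEAD_RESUME_ROOT = ""  # assumed key: head weights matter only when resuming training

# If BACKBONE_RESUME_ROOT is left empty, the backbone keeps its random
# initialization, and LFW accuracy lands near chance (~0.66), which
# matches the symptom reported earlier in this thread.
print(BACKBONE_RESUME_ROOT)
```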
