shufflenet v2+ssd performance loss too much #10
Do I have to merge the BatchNorm and Scale layers into the Convolution layer (set bias_term: to true and convert the caffemodel) before using this tool?
The results will be better that way.
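Folding BatchNorm/Scale into the preceding convolution just rescales the conv weights and shifts the bias so the fused layer computes the same output. A minimal numpy sketch of the arithmetic (layouts and names are illustrative, not this tool's actual code):

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm (+Scale) pair into the preceding convolution.

    W: conv weights [cout, cin, kh, kw]; b: conv bias [cout]
    (pass zeros if the original layer had bias_term: false).
    gamma/beta come from the Scale layer, mean/var from BatchNorm.
    """
    s = gamma / np.sqrt(var + eps)            # per-output-channel rescale
    W_folded = W * s[:, None, None, None]     # scale each output channel's filters
    b_folded = (b - mean) * s + beta          # shift the bias accordingly
    return W_folded, b_folded
```

After folding, the fused convolution needs bias_term: true even if the original conv had no bias, which is why the conversion step above is required.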
@DL-85 Hi, I followed the steps to convert my mobilenet-ssd to an int8 model, but inference takes longer and the accuracy is worse. Do you know the reason? I would be very grateful for any suggestions.
@titikid For this model (mobilenet_v1_ssd 300), I find that if you disable the int8 forward for the conv1 layer and use float32 forward in its place, the mAP may increase. Please help me confirm it.
@BUG1989 How do you know that?
@titikid We tried it out layer by layer ^_^
Recently I have been trying a new implementation: KLD + fine-tuning. It can reduce the precision loss (mAP loss < %). But I do this work in my spare time, so I don't know when I can finish it :(
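For reference, KLD calibration searches for an activation clipping threshold whose int8 histogram is closest (in KL divergence) to the FP32 histogram, in the style popularized by TensorRT. A rough numpy sketch of that search (bin counts and structure are illustrative, not this repository's implementation):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) over histogram bins, ignoring empty p-bins."""
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], eps))))

def find_clip_threshold(activations, num_bins=2048, num_quant=128):
    """Pick the |activation| clipping threshold whose num_quant-level
    re-quantized histogram stays closest (in KL) to the FP32 one."""
    hist, edges = np.histogram(np.abs(activations), bins=num_bins)
    hist = hist.astype(np.float64)
    best_t, best_kl = edges[-1], np.inf
    for i in range(num_quant, num_bins + 1):
        p = hist[:i].copy()
        p[-1] += hist[i:].sum()          # fold the clipped tail into the last bin
        q = np.zeros(i)
        for idx in np.array_split(np.arange(i), num_quant):
            chunk = p[idx]
            nz = chunk > 0
            if nz.any():                 # spread each level's mass over its nonzero bins
                q[idx[nz]] = chunk.sum() / nz.sum()
        kl = kl_divergence(p, q)
        if kl < best_kl:
            best_kl, best_t = kl, edges[i]
    return best_t
```

Fine-tuning would then adjust the weights with this clipping in the loop, which is the part that takes the extra engineering time.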
@BUG1989 Yes, KLD + fine-tuning should be better.
@titikid See: mixed-precision-inference
@BUG1989 Is there a mistake here: "before converting your model files, delete the layer weight scale line in the table file, and that layer will do the float32 inference"? I think the table file is the output of the conversion process.
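If the table-file route is the intended one, a small helper like this could strip one layer's entries so that layer falls back to float32. This assumes each calibration-table line starts with the layer name (or `<layer_name>_param...`) followed by its scale value(s), which may differ from the tool's actual format:

```python
def drop_layer_scales(table_path, layer_name, out_path):
    """Copy a calibration table, dropping every entry for `layer_name`
    so that layer falls back to float32 inference.

    Assumption: each table line starts with the layer name (or
    `<layer_name>_param...`) followed by its scale value(s).
    """
    with open(table_path) as f:
        lines = f.readlines()
    kept = []
    for ln in lines:
        tok = ln.split()
        if tok and (tok[0] == layer_name
                    or tok[0].startswith(layer_name + "_param")):
            continue  # drop this layer's scale entry
        kept.append(ln)
    with open(out_path, "w") as f:
        f.writelines(kept)
```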
@BUG1989 I added 7=1 to conv1 in the param file and the accuracy increased significantly (mAP = 66.35%). Cheers!
@titikid Thank you very much for helping me with this experiment.
@DL-85 @titikid I tested a new case with output-channel splitting: the weight tensor [k, k, cin, cout] is cut into cout buckets. Quantizing all layers in mobilenet_v1_ssd this way, the mAP goes from 60.38 to 68.85 (the FP32 mAP is 70.49).
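Per-output-channel quantization computes one scale per cout bucket from that channel's absolute maximum, instead of one scale for the whole tensor. A minimal numpy sketch (illustrative, not the repository's actual code):

```python
import numpy as np

def per_channel_weight_scales(W, qmax=127.0):
    """One int8 scale per output channel: W is [cout, cin, kh, kw]
    and each of the cout 'buckets' gets qmax / absmax(channel)."""
    absmax = np.abs(W).reshape(W.shape[0], -1).max(axis=1)
    return qmax / np.maximum(absmax, 1e-12)   # avoid divide-by-zero on dead channels

def quantize_per_channel(W, scales):
    """Quantize weights to int8 using the per-channel scales."""
    q = np.round(W * scales[:, None, None, None])
    return np.clip(q, -127, 127).astype(np.int8)
```

Because depthwise layers in MobileNet have very different per-channel weight ranges, one scale per channel wastes far fewer int8 levels than one scale per tensor, which fits the mAP jump reported above.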
@BUG1989 Do you have any documentation for the new quantization method (based on the number of groups)? By the way, I couldn't find any MobileNet_v1_dev.table in the repository.
@titikid Hi, I have updated those files; it is based on the number of output channels.
@BUG1989 I will try it. Thanks for your work!
How did you measure the quantization performance? @titikid
What do you mean by "add 7=1 in conv1 in param file"?
Thanks for the great tools!
When I use this tool to generate a table from the shufflenet v2+ssd prototxt and caffemodel, and then use the caffe2ncnn tool to get an int8 model, the performance loss is too large, but mobilenet v1+ssd works fine with this tool. Why is that? Any ideas?