We ran the mkl_dnn benchmark test in a Docker container on a Dell XPS 15 laptop, and found that:
The batch size of training samples is limited by the laptop's memory (8 GB) to at most 48, which is smaller than the minimum batch size used in the benchmark test on the server.
When the batch size is too small (<= 8), the training cost yields NaN. We may need to modify the test script to avoid such NaN costs.
I highly recommend expanding the memory for the benchmark, since 8 GB is even smaller than some GPUs (12 GB memory).
For very deep topologies like ResNet, we can only choose a very small batch size.
That cannot show the best performance of MKL-DNN or MKLML.
When we make the batch size smaller, we should make the learning rate smaller too; since VGG does not have batch norm layers, it is very easy to get NaN.