
update alexnet training data #6878

Merged: 2 commits, Dec 25, 2017

11 changes: 11 additions & 0 deletions benchmark/IntelOptimizedPaddle.md
@@ -22,6 +22,7 @@ On each machine, we will test and compare the performance of training on single

#### Training
Test on batch size 64, 128, 256 on Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Note that the speeds below include forward, backward and parameter update time, so they cannot be compared directly with the benchmark of the Caffe `time` [command](https://github.com/PaddlePaddle/Paddle/blob/develop/benchmark/caffe/image/run.sh#L9), which only covers forward and backward. The parameter update time becomes significant when the weights are large, especially for AlexNet.

Input image size - 3 * 224 * 224, Time: images/second
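
To make the difference between the two protocols concrete, the following is a minimal sketch, not the script used for the numbers below; `forward_backward` and `update_parameters` are hypothetical placeholders standing in for one iteration of the real workload.

```python
import time

BATCH_SIZE = 256

def forward_backward():
    """Stand-in for one forward + backward pass (what `caffe time` measures)."""
    time.sleep(0.010)  # placeholder for real compute

def update_parameters():
    """Stand-in for the weight update step, which is costly when weights are large."""
    time.sleep(0.005)  # placeholder for real compute

def images_per_second(seconds_per_iteration, batch_size=BATCH_SIZE):
    """Throughput as reported in the tables below: images processed per second."""
    return batch_size / seconds_per_iteration

# Protocol used in this document: forward + backward + parameter update.
start = time.time()
forward_backward()
update_parameters()
print("with update   : %.1f images/s" % images_per_second(time.time() - start))

# Protocol of `caffe time`: forward + backward only, hence not directly comparable.
start = time.time()
forward_backward()
print("without update: %.1f images/s" % images_per_second(time.time() - start))
```

Since throughput is `batch_size / seconds_per_iteration`, including the update step lowers the reported images/second for the same forward and backward cost.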

@@ -55,6 +56,16 @@ Input image size - 3 * 224 * 224, Time: images/second

<img src="figs/googlenet-cpu-train.png" width="500">

- Alexnet

| BatchSize | 64 | 128 | 256 |
|--------------|--------| ------ | -------|
| OpenBLAS | 0.85 | 1.03 | 1.17 |
| MKLML | 71.26 | 106.94 | 155.18 |
| MKL-DNN      | 362.66 | 497.66 | 610.73 |

Contributor comment:

| BatchSize | 64     | 128    | 256    |
|-----------|--------|--------|--------|
| OpenBLAS  | 2.13   | 2.45   | 2.68   |
| MKLML     | 66.37  | 105.60 | 144.04 |
| MKL-DNN   | 399.00 | 498.94 | 626.53 |


chart TBD
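
As a reading aid, the reported throughput can be converted back to per-iteration latency with `batch_size / images_per_second`; a small sketch using the MKL-DNN row from the first table above:

```python
# Convert the reported throughput (images/second) back to seconds per iteration
# for the MKL-DNN row of the training table: latency = batch_size / throughput.
mkldnn_images_per_sec = {64: 362.66, 128: 497.66, 256: 610.73}

for batch_size, throughput in sorted(mkldnn_images_per_sec.items()):
    latency = batch_size / throughput
    print("batch %3d: %.3f s per iteration" % (batch_size, latency))
```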

#### Inference
Test on batch size 1, 2, 4, 8, 16 on Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
- VGG-19