Deep Learning Benchmarking Suite (DLBS) is a collection of command line tools for running consistent and reproducible deep learning benchmark experiments on various hardware/software platforms. In particular, DLBS:
- Provides implementations of a number of neural networks in order to enforce apples-to-apples comparison across all supported frameworks. Supported models include various VGG, ResNet, AlexNet and GoogLeNet variants. DLBS can support many more models via integration with third-party benchmark projects such as Google's TF CNN Benchmarks or Tensor2Tensor.
- Benchmarks single-node multi-GPU and CPU platforms. Supported frameworks include various forks of Caffe (BVLC/NVIDIA/Intel), Caffe2, TensorFlow, MXNet and PyTorch. DLBS also supports NVIDIA's inference engine TensorRT, for which it provides a highly optimized benchmark backend.
- Supports inference and training phases.
- Supports synthetic and real data.
- Supports bare metal and docker environments.
- Supports single/half/int8 precision and uses tensor cores with Volta GPUs.
- Is based on a modular architecture that enables easy integration with other projects such as Google's TF CNN Benchmarks and Tensor2Tensor, or NVIDIA's NVCNN, NVCNN-HVD and similar.
- Supports a raw performance metric (the number of data samples processed per second, e.g. images/sec).
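For example, when a backend reports the time of one training iteration (batch), throughput in images/sec follows directly from the effective batch size. A minimal sketch with hypothetical numbers:

```bash
# A minimal sketch with assumed (hypothetical) numbers: throughput in images/sec
# is the effective batch size processed per iteration divided by the batch time.
per_gpu_batch=256        # assumed per-GPU batch size
num_gpus=4               # assumed number of GPUs in the run
batch_time=0.280         # assumed seconds per training iteration
python -c "print($per_gpu_batch * $num_gpus / $batch_time)"   # ~3657 images/sec
```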
Deep Learning Benchmarking Suite has been tested on various servers running Ubuntu, Red Hat and CentOS operating systems, with and without NVIDIA GPUs. We have had some success running DLBS on AMD GPUs, but this is mostly untested. It may not work on macOS due to the slightly different command line API of some of the tools we use (for instance, sed); we will fix this in one of the next releases.
- Install Docker and NVIDIA Docker for containerized benchmarks. Read here why we prefer to use Docker and here for installation/troubleshooting tips. This is not required: DLBS can also work with bare metal framework installations. A minimal environment check is sketched after this list.
- Clone Deep Learning Benchmarking Suite from GitHub:
  `git clone https://github.com/HewlettPackard/dlcookbook-dlbs dlbs`
- The benchmarking suite mostly uses modules from the standard Python library (Python 2.7). Optional dependencies that do not influence the benchmarking process are listed in `python/requirements.txt`; if they are not found, the code that uses them is disabled.
- Build/pull Docker images for containerized benchmarks, or build/install host frameworks for bare metal benchmarks. There are several ways to get Docker images; read here about the various options, including images from NVIDIA GPU Cloud. We may not support the newest framework versions due to API changes. Our recommendation is to use the Docker images specified in the default DLBS configuration; most of them are images from NVIDIA GPU Cloud.
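Before running containerized benchmarks it is worth verifying that Docker can see the GPUs and, optionally, installing the Python dependencies. A minimal sketch, assuming Docker with the NVIDIA runtime (nvidia-docker2) and the NGC image used in the quick start below:

```bash
# Sanity check (assumes Docker with the NVIDIA runtime is installed; the image
# tag is the one pulled in the quick start below).
docker --version
docker run --rm --runtime=nvidia nvcr.io/nvidia/tensorflow:18.07-py3 nvidia-smi

# Optional Python dependencies (reporting/plotting only, not required for
# benchmarking); run from the repository root.
pip install -r ./python/requirements.txt
```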
Assuming a CUDA-enabled GPU is present, execute the following commands to run simple experiments with the ResNet50 and AlexNetOWT models:
```bash
git clone https://github.com/HewlettPackard/dlcookbook-dlbs.git ./dlbs    # Install benchmarking suite
cd ./dlbs && source ./scripts/environment.sh                              # Initialize host environment
python ./python/dlbs/experimenter.py help --frameworks                    # List supported DL frameworks
docker pull nvcr.io/nvidia/tensorflow:18.07-py3                           # Pull TensorFlow docker image from NGC

python $experimenter run \                                                # Benchmark ...
       -Pexp.framework='"nvtfcnn"' \                                      # TensorFlow framework
       -Vexp.model='["resnet50", "alexnet_owt"]' \                        # with ResNet50 and AlexNetOWT models
       -Vexp.gpus='["0", "0,1", "0,1,2,3"]' \                             # run on 1, 2 and 4 GPUs
       -Pexp.dtype='"float16"' \                                          # use mixed-precision training
       -Pexp.log_file='"${HOME}/dlbs/logs/${exp.id}.log"'                 # and write results to these files

python $logparser '${HOME}/dlbs/logs/*.log' \                             # Parse log files and
       --output_file '${HOME}/dlbs/results.json'                          # print and write summary to this file

python $reporter --summary_file '${HOME}/dlbs/results.json' \             # Parse summary file and build
       --type 'weak-scaling' \                                            # weak scaling report
       --target_variable 'results.time'                                   # using batch time as performance metric
```
This configuration will run 6 benchmarks (2 models times 3 GPU configurations). DLBS supports multiple benchmark backends for deep learning frameworks; in this particular example DLBS uses NVIDIA's nvtfcnn TensorFlow benchmark backend, which is optimized for single- and multi-GPU systems. The introduction section contains more information on what backends represent and which ones users should use.
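The -V switch varies a parameter (here, 2 model values times 3 GPU sets give the 6 benchmarks), while -P fixes a single value. As an illustration, a minimal single-benchmark variant of the quick start with every parameter fixed via -P (the chosen values are illustrative):

```bash
# A single-benchmark variant of the quick start above: every parameter is fixed
# with -P, so exactly one experiment is generated (values are illustrative).
python $experimenter run \
       -Pexp.framework='"nvtfcnn"' \
       -Pexp.model='"resnet50"' \
       -Pexp.gpus='"0"' \
       -Pexp.dtype='"float32"' \
       -Pexp.log_file='"${HOME}/dlbs/logs/${exp.id}.log"'
```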
The introduction contains more examples of what DLBS can do.
We host documentation here.
- Why we created Benchmarking Suite
- GTC 2018 presentation / slides
- HPE Developer Portal
- HPE Deep Learning Performance Guide
Deep Learning Benchmarking Suite is licensed under the Apache 2.0 license.
All contributors must include acceptance of the DCO (Developer Certificate of Origin). Please read this document for more details.
- Natalia Vassilieva [email protected]
- Sergey Serebryakov [email protected]