A LibTorch inference implementation of YOLOv5. Both GPU and CPU are supported.
- Ubuntu 18.04
- CUDA 10.2
- LibTorch 1.7.0+
- TorchVision 0.8.1+
- OpenCV 3.4+
- First, set up the environment variables.

  ```bash
  export TORCH_PATH=$(dirname $(python -c "import torch; print(torch.__file__)"))
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TORCH_PATH/lib/
  ```
- Don't forget to compile TorchVision using the following script.

  ```bash
  git clone https://github.com/pytorch/vision.git
  cd vision
  git checkout release/0.8.0  # replace with the `nightly` branch if you are using the nightly version
  mkdir build && cd build
  cmake .. -DTorch_DIR=$TORCH_PATH/share/cmake/Torch
  make -j4
  sudo make install
  ```
- Generate the TorchScript model. Unlike ultralytics's trace (`torch.jit.trace`) mechanism, I'm using `torch.jit.script` to jit trace the YOLO models, which contain the whole pre-processing (especially the `GeneralizedRCNNTransform` ops) and post-processing (especially the `nms` op) procedures, so you don't need to rewrite the pre-processing and post-processing manually in C++.

  ```bash
  git clone https://github.com/zhiqwang/yolov5-rt-stack.git
  cd yolov5-rt-stack
  python -m test.tracing.trace_model
  ```
- Then compile the source code.

  ```bash
  cd deployment
  mkdir build && cd build
  cmake .. -DTorch_DIR=$TORCH_PATH/share/cmake/Torch
  make
  ```
- Now you can run inference on your own images.

  ```bash
  ./yolo_inference [--input_source YOUR_IMAGE_SOURCE_PATH] \
      [--checkpoint ../../checkpoints/yolov5/yolov5s.torchscript.pt] \
      [--labelmap ../../notebooks/assets/coco.names] \
      [--output_dir ../../data-bin/output] \
      [--gpu]  # GPU switch, disabled by default
  ```
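  Under the hood the binary loads the checkpoint with `torch::jit::load`, the C++ counterpart of Python's `torch.jit.load`, so you can sanity-check any TorchScript file from Python first. A minimal save/load round-trip sketch with a toy module (the `Scale` class and the file name `toy.torchscript.pt` are just for illustration):

  ```python
  import torch

  class Scale(torch.nn.Module):
      # Toy module standing in for the scripted YOLO model.
      def forward(self, x: torch.Tensor) -> torch.Tensor:
          return x * 2.0

  scripted = torch.jit.script(Scale().eval())
  # Produces the same serialized format the C++ binary consumes
  # via torch::jit::load.
  scripted.save("toy.torchscript.pt")
  reloaded = torch.jit.load("toy.torchscript.pt", map_location="cpu")
  out = reloaded(torch.ones(3))  # tensor([2., 2., 2.])
  ```
  
  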