
C++ Triton Client for Computer Vision Models

This C++ application performs computer vision tasks (e.g., object detection, classification, optical flow) using the Nvidia Triton Server for model inference. Triton manages multiple framework backends for streamlined model deployment.

Table of Contents

  • Supported (Tested) Models
  • Build Client Libraries
  • Dependencies
  • Build and Compile
  • Tasks
  • Deploying Models
  • Running Inference
  • Model Type Tag Parameters
  • Docker Support
  • Demo
  • References
  • Feedback

Supported (Tested) Models

The tested models cover the following tasks; see the Model Type Tag Parameters table below for the exact model tags:

  • Object Detection
  • Instance Segmentation
  • Classification
  • Optical Flow

Build Client Libraries

To build the client libraries, refer to the official Triton Inference Server client libraries.
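One way to obtain prebuilt client libraries, as a sketch rather than the only route, is to copy them out of the Triton SDK container matching your server release (the tag below assumes r24.12, and /workspace/install is the conventional install location inside the SDK image; adjust if your release differs):

# Pull the SDK image that ships the prebuilt C++ client libraries
docker pull nvcr.io/nvidia/tritonserver:24.12-py3-sdk

# Copy the client install tree out of a temporary container
docker create --name triton-sdk nvcr.io/nvidia/tritonserver:24.12-py3-sdk
docker cp triton-sdk:/workspace/install ./tritonclient
docker rm triton-sdk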

Dependencies

Ensure the following dependencies are installed:

  1. Nvidia Triton Inference Server:
docker pull nvcr.io/nvidia/tritonserver:24.12-py3
  2. Triton client libraries: Tested on Release r24.12
  3. Protobuf and gRPC++: Versions compatible with Triton
  4. RapidJSON:
apt install rapidjson-dev
  5. libcurl:
apt install libcurl4-openssl-dev
  6. OpenCV 4: Tested version: 4.7.0
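On Debian/Ubuntu systems, the non-Triton dependencies can usually be installed from the package manager; a sketch is shown below. Distro-packaged Protobuf, gRPC, and OpenCV versions may differ from the versions Triton and this project were tested against, in which case build them from source instead:

sudo apt update
sudo apt install -y \
    rapidjson-dev \
    libcurl4-openssl-dev \
    libopencv-dev \
    libprotobuf-dev protobuf-compiler \
    libgrpc++-dev protobuf-compiler-grpc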

Build and Compile

  1. Set the environment variable TritonClientBuild_DIR or update the CMakeLists.txt with the path to your installed Triton client libraries.

  2. Create a build directory:

mkdir build
  3. Navigate to the build directory:
cd build
  4. Run CMake to configure the build:
cmake -DCMAKE_BUILD_TYPE=Release ..

Optional flags:

  • -DSHOW_FRAME: Enable to display processed frames after inference
  • -DWRITE_FRAME: Enable to write processed frames to disk

  5. Build the application:
cmake --build .
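Put together, a configure-and-build run might look like the following sketch; the install path is a placeholder, and the optional flags are assumed to be standard CMake ON/OFF options:

# Point CMake at the installed Triton client libraries (placeholder path)
export TritonClientBuild_DIR=/path/to/tritonclient/install
mkdir -p build && cd build
# Optional flags shown enabled; drop them if not needed
cmake -DCMAKE_BUILD_TYPE=Release -DSHOW_FRAME=ON -DWRITE_FRAME=ON ..
cmake --build . -j "$(nproc)"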

Tasks

Export Instructions

Other tasks are on the TODO list.

Notes

Ensure the model export versions match those supported by your Triton release; check the Triton Inference Server release notes for your version.

Deploying Models

To deploy models, set up a model repository following the Triton Model Repository schema. The config.pbtxt file is optional unless you're using the OpenVINO backend, implementing an Ensemble pipeline, or passing custom inference parameters.
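For reference, a minimal config.pbtxt for an OpenVINO-served detector might look like the sketch below; the model name, tensor names, and shapes are hypothetical and must match your exported model:

name: "yolov8n_openvino"
backend: "openvino"
max_batch_size: 0
input [
  {
    name: "images"            # hypothetical input tensor name
    data_type: TYPE_FP32
    dims: [ 1, 3, 640, 640 ]
  }
]
output [
  {
    name: "output0"           # hypothetical output tensor name
    data_type: TYPE_FP32
    dims: [ 1, 84, 8400 ]
  }
]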

Model Repository Structure

<model_repository>/
    <model_name>/
        config.pbtxt
        <model_version>/
            <model_binary>
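For example, a repository serving a single ONNX detection model could look like this (the model and version names are placeholders; model.onnx is the default file name expected by Triton's ONNX Runtime backend):

model_repository/
    yolov8n/
        config.pbtxt
        1/
            model.onnx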

To start Triton Server:

docker run --gpus=1 --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /full/path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:<xx.yy>-py3 tritonserver \
  --model-repository=/models

Omit the --gpus flag if using the CPU version.
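Once the container is up, you can confirm the server is ready before launching the client using Triton's standard HTTP health endpoint:

curl -v localhost:8000/v2/health/ready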

Running Inference

Command-Line Inference on Video or Image

./computer-vision-triton-cpp-client \
    --source=/path/to/source.format \
    --model_type=<model_type> \
    --model=<model_name_folder_on_triton> \
    --labelsFile=/path/to/labels/coco.names \
    --protocol=<http or grpc> \
    --serverAddress=<triton-ip> \
    --port=<8000 for http, 8001 for grpc>

For dynamic input sizes:

    --input_sizes="c,h,w"
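Putting the template together, a hypothetical run against a YOLOv8 model served over HTTP on the local machine might look like this (paths and model name are placeholders):

./computer-vision-triton-cpp-client \
    --source=data/traffic.mp4 \
    --model_type=yolov8 \
    --model=yolov8n \
    --labelsFile=labels/coco.names \
    --protocol=http \
    --serverAddress=localhost \
    --port=8000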

Debugging Tips

Check .vscode/launch.json for additional configuration examples.

Placeholder Descriptions

  • /path/to/source.format: Path to the input video or image file; for optical flow, pass two images as a comma-separated list
  • <model_type>: Model type (e.g., yolov5, yolov8, yolo11, yoloseg, torchvision-classifier, tensorflow-classifier; see the Model Type Tag Parameters table below)
  • <model_name_folder_on_triton>: Name of the model folder on the Triton server
  • /path/to/labels/coco.names: Path to the label file (e.g., COCO labels)
  • <http or grpc>: Communication protocol (http or grpc)
  • <triton-ip>: IP address of your Triton server
  • <8000 for http, 8001 for grpc>: Port number
  • <batch or b>: Batch size; currently only 1 is supported
  • <input_sizes or -is>: Input sizes for dynamic axes, given as a semicolon-separated list in CHW;CHW;... format (e.g., '3,224,224' for a single input, '3,224,224;3,224,224' for two inputs, or '3,640,640;2' for rtdetr/dfine models)
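As a concrete use of --input_sizes, a hypothetical D-FINE model with two inputs (image plus target size, matching the '3,640,640;2' example above) could be run over gRPC like this:

./computer-vision-triton-cpp-client \
    --source=data/street.jpg \
    --model_type=dfine \
    --model=dfine_s \
    --labelsFile=labels/coco.names \
    --protocol=grpc \
    --serverAddress=localhost \
    --port=8001 \
    --input_sizes="3,640,640;2"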

To view all available parameters, run:

./computer-vision-triton-cpp-client --help

Model Type Tag Parameters

Model                      Model Type Parameter
YOLOv5                     yolov5
YOLOv6                     yolov6
YOLOv7                     yolov7
YOLOv8                     yolov8
YOLOv9                     yolov9
YOLOv10                    yolov10
YOLO11                     yolo11
RT-DETR                    rtdetr
RT-DETR Ultralytics        rtdetrul
D-FINE                     dfine
Torchvision Classifier     torchvision-classifier
Tensorflow Classifier      tensorflow-classifier
YOLOv5 Segmentation        yoloseg
YOLOv8 Segmentation        yoloseg
YOLO11 Segmentation        yoloseg
RAFT Optical Flow          raft

Docker Support

For detailed instructions on installing Docker and the NVIDIA Container Toolkit, refer to the Docker Setup Document.

Build Image

docker build --rm -t computer-vision-triton-cpp-client .

Run Container

docker run --rm \
  --network host \
  -v /path/to/host/data:/app/data \
  computer-vision-triton-cpp-client \
  --source=<path_to_source_on_container> \
  --model_type=<model_type> \
  --model=<model_name_folder_on_triton> \
  --labelsFile=<path_to_labels_on_container> \
  --protocol=<http or grpc> \
  --serverAddress=<triton-ip> \
  --port=<8000 for http, 8001 for grpc>
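A filled-in invocation, assuming the Triton server from the previous section is reachable on the same host, might look like this (paths and model name are placeholders):

docker run --rm \
  --network host \
  -v /home/user/data:/app/data \
  computer-vision-triton-cpp-client \
  --source=/app/data/traffic.mp4 \
  --model_type=yolov8 \
  --model=yolov8n \
  --labelsFile=/app/data/coco.names \
  --protocol=http \
  --serverAddress=localhost \
  --port=8000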

Demo

Real-time inference test (GPU RTX 3060):

References

Feedback

Any feedback is greatly appreciated. If you have any suggestions, bug reports, or questions, don't hesitate to open an issue.
