- InferenceEngines is a C++ library designed for seamless integration of various backend engines for inference tasks.
- It supports multiple frameworks and libraries such as OpenCV DNN, TensorFlow, PyTorch (LibTorch), ONNX Runtime, TensorRT, and OpenVINO.
- The project aims to provide a unified interface for performing inference using these backends, allowing flexibility in choosing the most suitable backend based on performance or compatibility requirements.
- The library is currently used mainly as a component of the Object Detection Inference Project.
- C++17
- OpenCV
- glog
- OpenCV DNN module (4.11.0)
- ONNX Runtime (1.19.2, GPU package)
- LibTorch (2.0.1-cu118)
- TensorRT (10.0.7.23)
- OpenVINO (2024.1)
- Libtensorflow (2.13): inference on SavedModel format only, not frozen graphs
- CUDA (if you want to use GPU)
- Clone the repository:

```shell
git clone https://github.com/inference_engines.git
cd InferenceEngines
```
- Create a build directory and navigate into it:

```shell
mkdir build
cd build
```
- Configure the build with CMake:

```shell
cmake ..
```

Optionally, you can specify the default backend by passing `-DDEFAULT_BACKEND=your_backend` during configuration.
- Note: If a backend package is not installed on your system, set its path manually in the backend's CMake module (e.g. for LibTorch, modify Libtorch.cmake or pass the argument `Torch_DIR`; for ONNX Runtime, modify ONNXRuntime.cmake or pass the argument `ORT_VERSION`). The same applies to the other backends' local packages.
- Build the project:

```shell
cmake --build .
```

This will compile the project along with the selected backend(s).
To use the InferenceEngines library in your project, link against it and include the necessary headers (check the example here):

```cmake
target_link_libraries(your_project PRIVATE InferenceEngines)
target_include_directories(your_project PRIVATE path_to/InferenceEngines/include)
```
Ensure you have initialized and set up the selected backend(s) appropriately in your code using the provided interface headers.