libxop.so strives to bundle the custom operators of all backend inference engines, such as ONNXRuntime and libtorch.
Since it is hardly practical for edge devices to use it for inference, and its main purpose is to support model conversion across DNN frameworks and graph optimization, only the x86 version has been developed.
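As a sketch of how such a library is consumed (this is not the project's documented API, just the standard ONNXRuntime mechanism for custom-op libraries; the library path and model name below are placeholders):

```shell
# Sketch: load a custom-op library like libxop.so into an ONNXRuntime session.
# Requires the onnxruntime Python package; skipped gracefully if it is absent.
if python3 -c "import onnxruntime" 2>/dev/null; then
  python3 - <<'EOF' || echo "example needs the built libxop.so and a model file"
import onnxruntime as ort
opts = ort.SessionOptions()
opts.register_custom_ops_library("./libxop.so")  # path is an assumption
sess = ort.InferenceSession("model_with_custom_ops.onnx", sess_options=opts)
EOF
else
  echo "onnxruntime not installed; skipping example"
fi
```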
Useful, easy to use, indispensable
- cmake >= 3.15.5
- cuda 11.4 (recommended)
- g++ >= 7.5 or 9.3 (recommended)
- ubuntu >= 18.04 or 20.04 (recommended)
- onnxruntime-linux-x64-1.8.1
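Before cloning and building, a quick sanity check that the toolchain above is on `PATH` can save a failed build (a minimal sketch; exact version parsing varies between tools):

```shell
# Sketch: check that the prerequisites listed above are installed.
for tool in cmake g++ nvcc; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s: %s\n' "$tool" "$("$tool" --version 2>/dev/null | head -n1)"
  else
    printf '%s: NOT FOUND\n' "$tool"
  fi
done
```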
```shell
git clone -b develop http://10.94.119.155/team/percep/porting/libraries/xop.git
docker pull 10.95.61.122:80/devops/dds_cross_compile:v3.3.1
docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY --privileged --network host -v /home/igs:/root/code --gpus all --name dds-conan-v3.3.1 6e5b2467c5be bash
```
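If the container is stopped after the first run, it can be re-entered later without recreating it (a sketch; the container name matches the `--name` flag above):

```shell
# Sketch: restart and re-enter the previously created container.
if command -v docker >/dev/null 2>&1; then
  docker start dds-conan-v3.3.1
  docker exec -it dds-conan-v3.3.1 bash
else
  echo "docker is not installed on this host"
fi
```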
```shell
# inside the docker env
cd xop
sh ./scripts/build_project.sh x86_64
```
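After the build script finishes, it is worth confirming that the shared library was actually produced (a sketch; the output path below is an assumption, adjust it to the build script's actual layout):

```shell
# Sketch: verify the build produced libxop.so; the path is an assumption.
LIB=./build/libxop.so
if [ -f "$LIB" ]; then
  nm -D "$LIB" | head -n5   # peek at the exported symbols
else
  echo "$LIB not found - check the build output directory"
fi
```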
none
xop is provided under the [Apache-2.0 license](LICENSE).