Merge Master, update optimizer (#151)
* [Feature] add yolox ncnn (#29)

* add yolox ncnn

* add ncnn android performance of yolox

* add ut

* fix lint

* fix None bugs for ncnn

* test codecov

* test codecov

* add device

* fix yapf

* remove if-else for img shape

* use channelshuffle optimize

* change benchmark after channelshuffle

* fix yapf

* fix yapf

* fuse continuous reshape

* fix static shape deploy

* fix code

* drop pad

* only static shape

* fix static

* fix docstring

* Added mask overlay to output image, changed fprintf info messages to … (#55)

* Added mask overlay to output image, changed fprintf info messages to stdout

* Improved box filtering (filter area/score), make sure roi coordinates stay within bounds

* clang-format

* Support UNet in mmseg (#77)

* RepeatDataset in train has no CLASSES & PALETTE

* update result for unet

* update docstring for mmdet

* remove ppl for unet in docs

* fix ort wrap about input type (#81)

* Fix memleak (#86)

* delete []

* fix build error when enabling MMDEPLOY_ACTIVE_LEVEL

* fix lint

* [Doc] Nano benchmark and tutorial (#71)

* add cls benchmark

* add nano zh-cn benchmark and en tutorial

* add device row

* add doc path to index.rst

* fix typo

* [Fix] fix missing deploy_core (#80)

* fix missing deploy_core

* mv flag to demo

* target link

* [Docs] Fix links in Chinese doc (#84)

* Fix docs in Chinese link

* Fix links

* Delete symbolic link and add links to html

* delete files

* Fix link

* [Feature] Add docker files (#67)

* add gpu and cpu dockerfile

* fix lint

* fix cpu docker and remove redundant

* use pip instead

* add build arg and readme

* fix grammar

* update readme

* add chinese doc for dockerfile and add docker build to build.md

* grammar

* refine dockerfiles

* add FAQs

* update Dpplcv_DIR for SDK building

* remove mmcls

* add sdk demos

* fix typo and lint

* update FAQs

* [Fix]fix check_env (#101)

* fix check_env

* update

* Replace convert_syncbatchnorm in mmseg (#93)

* replace convert_syncbatchnorm with revert_sync_batchnorm from mmcv

* change logger

* [Doc] Update FAQ for TensorRT (#96)

* update FAQ

* comment

* [Docs]: Update doc for openvino installation (#102)

* fix docs

* fix docs

* fix docs

* fix mmcv version

* fix docs

* rm blank line

* simplify non batch nms (#99)

* [Enhancement] Allow test.py to save evaluation results (#108)

* Add log file

* Delete debug code

* Rename logger

* resolve comments

* [Enhancement] Support mmocr v0.4+ (#115)

* support mmocr v0.4+

* 0.4.0 -> 0.4.1

* fix onnxruntime wrapper for gpu inference (#123)

* fix ncnn wrapper for ort-gpu

* resolve comment

* fix lint

* Fix typo (#132)

* lock mmcls version (#131)

* [Enhancement] upgrade isort in pre-commit config (#141)

* [Enhancement] upgrade isort in pre-commit config by referring to mmflow PR #87

* fix lint

* remove .isort.cfg and put its known_third_party to setup.cfg

* Fix ci for mmocr (#144)

* fix mmocr unittests

* remove useless

* lock mmdet maximum version to 2.20

* pip install -U numpy

* Fix capture_output (#125)

Co-authored-by: hanrui1sensetime <[email protected]>
Co-authored-by: Johannes L <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: VVsssssk <[email protected]>
Co-authored-by: lvhan028 <[email protected]>
Co-authored-by: AllentDan <[email protected]>
Co-authored-by: Yifan Zhou <[email protected]>
Co-authored-by: 杨培文 (Yang Peiwen) <[email protected]>
Co-authored-by: Semyon Bevzyuk <[email protected]>
10 people authored Feb 11, 2022
1 parent 5ae4609 commit a25f360
Showing 88 changed files with 1,490 additions and 394 deletions.
3 changes: 3 additions & 0 deletions .github/workflows/build.yml
@@ -44,6 +44,7 @@ jobs:
- name: Install unittest dependencies
run: |
pip install -r requirements.txt
pip install -U numpy
- name: Build and install
run: rm -rf .eggs && pip install -e .
- name: Run unittests and generate coverage report
@@ -85,6 +86,7 @@ jobs:
python -V
python -m pip install mmcv-full==${{matrix.mmcv}} -f https://download.openmmlab.com/mmcv/dist/cu102/${{matrix.torch_version}}/index.html
python -m pip install -r requirements.txt
python -m pip install -U numpy
- name: Build and install
run: |
rm -rf .eggs && python -m pip install -e .
@@ -128,6 +130,7 @@ jobs:
python -V
python -m pip install mmcv-full==${{matrix.mmcv}} -f https://download.openmmlab.com/mmcv/dist/cu111/${{matrix.torch_version}}/index.html
python -m pip install -r requirements.txt
python -m pip install -U numpy
- name: Build and install
run: |
rm -rf .eggs && python -m pip install -e .
2 changes: 0 additions & 2 deletions .isort.cfg

This file was deleted.

8 changes: 2 additions & 6 deletions .pre-commit-config.yaml
@@ -3,12 +3,8 @@ repos:
rev: 4.0.1
hooks:
- id: flake8
- repo: https://github.com/asottile/seed-isort-config
rev: v2.2.0
hooks:
- id: seed-isort-config
- repo: https://github.com/timothycrosley/isort
rev: 4.3.21
- repo: https://github.com/PyCQA/isort
rev: 5.10.1
hooks:
- id: isort
- repo: https://github.com/pre-commit/mirrors-yapf
4 changes: 4 additions & 0 deletions configs/mmdet/detection/single-stage_ncnn_static-416x416.py
@@ -0,0 +1,4 @@
_base_ = ['../_base_/base_static.py', '../../_base_/backends/ncnn.py']

codebase_config = dict(model_type='ncnn_end2end')
onnx_config = dict(output_names=['detection_output'], input_shape=[416, 416])
5 changes: 5 additions & 0 deletions configs/mmseg/segmentation_pplnn_static-512x1024.py
@@ -0,0 +1,5 @@
_base_ = ['./segmentation_static.py', '../_base_/backends/pplnn.py']

onnx_config = dict(input_shape=[1024, 512])

backend_config = dict(model_inputs=dict(opt_shape=[1, 3, 512, 1024]))
2 changes: 1 addition & 1 deletion csrc/apis/c/detector.cpp
@@ -155,7 +155,7 @@ MM_SDK_API void mmdeploy_detector_release_result(mm_detect_t* results, const int
for (int i = 0; i < count; ++i) {
for (int j = 0; j < result_count[i]; ++j, ++result_ptr) {
if (result_ptr->mask) {
delete result_ptr->mask->data;
delete[] result_ptr->mask->data;
delete result_ptr->mask;
}
}
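The one-character fix above matters because the mask buffer is allocated with new[]: memory from new[] must be released with the matching delete[], and using plain delete on it is undefined behavior that typically surfaces as exactly this kind of leak. A minimal standalone sketch (the buffer below is hypothetical, not the SDK's actual mask allocation):

```cpp
#include <cstdint>

int main() {
  // new[] constructs an array and records its element count; only the
  // matching delete[] knows how to destroy and release the whole block.
  uint8_t* mask_data = new uint8_t[64 * 64];

  // Wrong: `delete mask_data;` treats the block as a single object,
  // which is undefined behavior for an array allocation.
  delete[] mask_data;  // correct: array form matches new[]
  return 0;
}
```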
1 change: 1 addition & 0 deletions csrc/backend_ops/torchscript/bind.cpp
@@ -1,3 +1,4 @@
// Copyright (c) OpenMMLab. All rights reserved.
#include "torch/script.h"

TORCH_LIBRARY(mmdeploy, m) {
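bind.cpp above is where MMDeploy registers its TorchScript custom ops in the mmdeploy namespace through the TORCH_LIBRARY macro. A rough sketch of that registration pattern, using a hypothetical op that is not part of MMDeploy and is compiled into a shared library that TorchScript loads at runtime:

```cpp
#include <torch/script.h>

// Hypothetical op: scale a tensor by a constant factor.
torch::Tensor scale(torch::Tensor x, double factor) {
  return x * factor;
}

// Registering a schema under the "example" namespace makes the op callable
// from TorchScript graphs (and from Python as torch.ops.example.scale)
// once this library is loaded.
TORCH_LIBRARY(example, m) {
  m.def("scale(Tensor x, float factor) -> Tensor", scale);
}
```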
@@ -1,3 +1,4 @@
// Copyright (c) OpenMMLab. All rights reserved.
#include "modulated_deform_conv/modulated_deform_conv_cpu.h"

#include "torch/script.h"
@@ -1,3 +1,4 @@
// Copyright (c) OpenMMLab. All rights reserved.
#include "c10/cuda/CUDAStream.h"
#include "modulated_deform_conv/modulated_deform_conv_cuda.cuh"
#include "torch/script.h"
23 changes: 23 additions & 0 deletions csrc/backend_ops/torchscript/optimizer/optimizer.cpp
@@ -1,7 +1,15 @@
// Copyright (c) OpenMMLab. All rights reserved.
#include "optimizer.h"

#include <torch/csrc/jit/passes/canonicalize_graph_fuser_ops.h>
#include <torch/csrc/jit/passes/common_subexpression_elimination.h>
#include <torch/csrc/jit/passes/constant_pooling.h>
#include <torch/csrc/jit/passes/constant_propagation.h>
#include <torch/csrc/jit/passes/dead_code_elimination.h>
#include <torch/csrc/jit/passes/freeze_module.h>
#include <torch/csrc/jit/passes/frozen_graph_optimizations.h>
#include <torch/csrc/jit/passes/peephole.h>
#include <torch/csrc/jit/passes/remove_expands.h>

#if TORCH_VERSION_MINOR >= 9
#include <torch/csrc/jit/passes/frozen_conv_add_relu_fusion.h>
@@ -10,6 +18,15 @@
#endif

namespace mmdeploy {

using torch::jit::Graph;
const std::shared_ptr<Graph>& required_passes(const std::shared_ptr<Graph>& graph) {
RemoveExpands(graph);
CanonicalizeOps(graph);
EliminateDeadCode(graph);
return graph;
}

Module optimize_for_torchscript(const Module& model) {
auto frozen_model = freeze_module(model);
auto graph = frozen_model.get_method("forward").graph();
@@ -21,6 +38,12 @@ Module optimize_for_torchscript(const Module& model) {
FrozenLinearTranspose(graph);
#endif

graph = required_passes(graph);
EliminateCommonSubexpression(graph);
PeepholeOptimize(graph);
ConstantPropagation(graph);
ConstantPooling(graph);

// TODO: add more custom passes

return frozen_model;
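The passes pulled in above (RemoveExpands, CanonicalizeOps, EliminateDeadCode, EliminateCommonSubexpression, PeepholeOptimize, ConstantPropagation, ConstantPooling) all come from torch/csrc/jit/passes and run on the frozen module's forward graph. A hedged sketch of driving optimize_for_torchscript from C++; the file paths are placeholders and error handling is omitted:

```cpp
#include <torch/script.h>

#include "optimizer.h"  // declares mmdeploy::optimize_for_torchscript

int main() {
  // Load a module previously saved from Python with torch.jit.save.
  torch::jit::Module model = torch::jit::load("end2end.torchscript.pt");
  model.eval();  // freezing expects an eval-mode module

  // Freeze the module and run the JIT graph passes added in this commit.
  torch::jit::Module optimized = mmdeploy::optimize_for_torchscript(model);

  // Persist the optimized TorchScript module.
  optimized.save("end2end_optimized.torchscript.pt");
  return 0;
}
```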
1 change: 1 addition & 0 deletions csrc/backend_ops/torchscript/optimizer/optimizer.h
@@ -1,3 +1,4 @@
// Copyright (c) OpenMMLab. All rights reserved.
#include <torch/script.h>

namespace mmdeploy {
2 changes: 1 addition & 1 deletion csrc/codebase/common.h
@@ -14,7 +14,7 @@ namespace mmdeploy {
class Context {
public:
explicit Context(const Value& config) {
DEBUG("config: {}", cfg);
DEBUG("config: {}", config);
device_ = config["context"]["device"].get<Device>();
stream_ = config["context"]["stream"].get<Stream>();
}
3 changes: 1 addition & 2 deletions csrc/codebase/mmdet/object_detection.cpp
@@ -107,8 +107,7 @@ Result<DetectorOutput> ResizeBBox::GetBBoxes(const Value& prep_res, const Tensor
rect[3] - rect[1]);
continue;
}
DEBUG("remap left {}, top {}, right {}, bottom {}", rect.left, rect.top, rect.right,
rect.bottom);
DEBUG("remap left {}, top {}, right {}, bottom {}", rect[0], rect[1], rect[2], rect[3]);
DetectorOutput::Detection det{};
det.index = i;
det.label_id = static_cast<int>(*labels_ptr);
4 changes: 2 additions & 2 deletions demo/csrc/CMakeLists.txt
@@ -7,8 +7,8 @@ find_package(MMDeploy REQUIRED)

function(add_example name)
add_executable(${name} ${name}.cpp)
target_link_libraries(${name} ${MMDeploy_LIBS} opencv_imgcodecs
opencv_imgproc opencv_core)
target_link_libraries(${name} ${MMDeploy_LIBS} -Wl,--disable-new-dtags
opencv_imgcodecs opencv_imgproc opencv_core)
endfunction()

add_example(image_classification)
34 changes: 32 additions & 2 deletions demo/csrc/object_detection.cpp
@@ -37,12 +37,42 @@ int main(int argc, char *argv[]) {
return 1;
}

fprintf(stderr, "bbox_count=%d\n", *res_count);
fprintf(stdout, "bbox_count=%d\n", *res_count);

for (int i = 0; i < *res_count; ++i) {
const auto &box = bboxes[i].bbox;
fprintf(stderr, "box %d, left=%.2f, top=%.2f, right=%.2f, bottom=%.2f, label=%d, score=%.4f\n",
const auto &mask = bboxes[i].mask;

fprintf(stdout, "box %d, left=%.2f, top=%.2f, right=%.2f, bottom=%.2f, label=%d, score=%.4f\n",
i, box.left, box.top, box.right, box.bottom, bboxes[i].label_id, bboxes[i].score);

// skip detections with invalid bbox size (bbox height or width < 1)
if ((box.right - box.left) < 1 || (box.bottom - box.top) < 1) {
continue;
}

// skip detections less than specified score threshold
if (bboxes[i].score < 0.1) {
continue;
}

// generate mask overlay if model exports masks
if (mask != nullptr) {
fprintf(stdout, "mask %d, height=%d, width=%d\n", i, mask->height, mask->width);

cv::Mat imgMask(mask->height, mask->width, CV_8UC1, &mask->data[0]);
auto x0 = std::max(std::floor(box.left) - 1, 0.f);
auto y0 = std::max(std::floor(box.top) - 1, 0.f);
cv::Rect roi((int)x0, (int)y0, mask->width, mask->height);

// split the RGB channels, overlay mask to a specific color channel
cv::Mat ch[3];
split(img, ch);
int col = 0; // int col = i % 3;
cv::bitwise_or(imgMask, ch[col](roi), ch[col](roi));
merge(ch, 3, img);
}

cv::rectangle(img, cv::Point{(int)box.left, (int)box.top},
cv::Point{(int)box.right, (int)box.bottom}, cv::Scalar{0, 255, 0});
}
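PR #55 above aims to keep the mask roi coordinates within bounds, but the demo clamps only the top-left corner; a detection that touches the right or bottom image edge can still yield a cv::Rect that spills past the border. One common guard, shown as a sketch and assuming the mask is then cropped to the same size before cv::bitwise_or, is to intersect the ROI with the full image rectangle:

```cpp
#include <opencv2/core.hpp>

// Clamp a mask ROI so it never indexes outside the image.
// x0/y0 and the mask size come from the detection, as in the demo above.
cv::Rect clamp_roi(int x0, int y0, int mask_w, int mask_h, const cv::Mat& img) {
  cv::Rect roi(x0, y0, mask_w, mask_h);
  // cv::Rect's operator& yields the intersection of two rectangles,
  // which is empty when the ROI lies entirely outside the image.
  return roi & cv::Rect(0, 0, img.cols, img.rows);
}
```

If the intersection shrinks the ROI, the two cv::bitwise_or operands no longer match, so the mask would need the corresponding sub-rectangle as well.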
105 changes: 105 additions & 0 deletions docker/CPU/Dockerfile
@@ -0,0 +1,105 @@
FROM openvino/ubuntu18_dev:2021.4.2
ARG PYTHON_VERSION=3.7
ARG TORCH_VERSION=1.8.0
ARG TORCHVISION_VERSION=0.9.0
ARG ONNXRUNTIME_VERSION=1.8.1
ARG MMCV_VERSION=1.4.0
ARG CMAKE_VERSION=3.20.0
USER root
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
libopencv-dev libspdlog-dev \
gnupg \
libssl-dev \
libprotobuf-dev protobuf-compiler \
build-essential \
libjpeg-dev \
libpng-dev \
ccache \
cmake \
gcc \
g++ \
git \
vim \
wget \
curl \
&& rm -rf /var/lib/apt/lists/*

RUN curl -fsSL -v -o ~/miniconda.sh -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
chmod +x ~/miniconda.sh && \
~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
/opt/conda/bin/conda install -y python=${PYTHON_VERSION} conda-build pyyaml numpy ipython cython typing typing_extensions mkl mkl-include ninja && \
/opt/conda/bin/conda clean -ya

### pytorch
RUN /opt/conda/bin/pip install torch==${TORCH_VERSION}+cpu torchvision==${TORCHVISION_VERSION}+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html
ENV PATH /opt/conda/bin:$PATH

### install open-mim
RUN /opt/conda/bin/pip install mmcv-full==${MMCV_VERSION} -f https://download.openmmlab.com/mmcv/dist/cpu/torch${TORCH_VERSION}/index.html

WORKDIR /root/workspace

### get onnxruntime
RUN wget https://github.com/microsoft/onnxruntime/releases/download/v${ONNXRUNTIME_VERSION}/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz \
&& tar -zxvf onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz

ENV ONNXRUNTIME_DIR=/root/workspace/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}

### update cmake to 20
RUN wget https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}.tar.gz &&\
tar -zxvf cmake-${CMAKE_VERSION}.tar.gz &&\
cd cmake-${CMAKE_VERSION} &&\
./bootstrap &&\
make &&\
make install

### install onnxruntme and openvino
RUN /opt/conda/bin/pip install onnxruntime==${ONNXRUNTIME_VERSION} openvino-dev

### build ncnn
RUN git clone https://github.com/Tencent/ncnn.git &&\
cd ncnn &&\
export NCNN_DIR=$(pwd) &&\
git submodule update --init &&\
mkdir -p build && cd build &&\
cmake -DNCNN_VULKAN=OFF -DNCNN_SYSTEM_GLSLANG=ON -DNCNN_BUILD_EXAMPLES=ON -DNCNN_PYTHON=ON -DNCNN_BUILD_TOOLS=ON -DNCNN_BUILD_BENCHMARK=ON -DNCNN_BUILD_TESTS=ON .. &&\
make install &&\
cd /root/workspace/ncnn/python &&\
pip install -e .

### install mmdeploy
WORKDIR /root/workspace
ARG VERSION
RUN git clone https://github.com/open-mmlab/mmdeploy.git &&\
cd mmdeploy &&\
if [ -z ${VERSION} ] ; then echo "No MMDeploy version passed in, building on master" ; else git checkout tags/v${VERSION} -b tag_v${VERSION} ; fi &&\
git submodule update --init --recursive &&\
rm -rf build &&\
mkdir build &&\
cd build &&\
cmake -DMMDEPLOY_TARGET_BACKENDS=ncnn -Dncnn_DIR=/root/workspace/ncnn/build/install/lib/cmake/ncnn .. &&\
make -j$(nproc) &&\
cmake -DMMDEPLOY_TARGET_BACKENDS=ort .. &&\
make -j$(nproc) &&\
cd .. &&\
pip install -e .

### build SDK
ENV LD_LIBRARY_PATH="/root/workspace/mmdeploy/build/lib:/opt/intel/openvino/deployment_tools/ngraph/lib:/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64:${LD_LIBRARY_PATH}"
RUN cd mmdeploy && rm -rf build/CM* && mkdir -p build && cd build && cmake .. \
-DMMDEPLOY_BUILD_SDK=ON \
-DCMAKE_CXX_COMPILER=g++-7 \
-DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
-Dncnn_DIR=/root/workspace/ncnn/build/install/lib/cmake/ncnn \
-DInferenceEngine_DIR=/opt/intel/openvino/deployment_tools/inference_engine/share \
-DMMDEPLOY_TARGET_DEVICES=cpu \
-DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
-DMMDEPLOY_TARGET_BACKENDS="ort;ncnn;openvino" \
-DMMDEPLOY_CODEBASES=all &&\
cmake --build . -- -j$(nproc) && cmake --install . &&\
cd install/example && mkdir -p build && cd build &&\
cmake -DMMDeploy_DIR=/root/workspace/mmdeploy/build/install/lib/cmake/MMDeploy .. &&\
cmake --build . && export SPDLOG_LEVEL=warn &&\
if [ -z ${VERSION} ] ; then echo "Built MMDeploy master for CPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for CPU devices successfully!" ; fi
90 changes: 90 additions & 0 deletions docker/GPU/Dockerfile
@@ -0,0 +1,90 @@
FROM nvcr.io/nvidia/tensorrt:21.04-py3

ARG CUDA=10.2
ARG PYTHON_VERSION=3.8
ARG TORCH_VERSION=1.8.0
ARG TORCHVISION_VERSION=0.9.0
ARG ONNXRUNTIME_VERSION=1.8.1
ARG MMCV_VERSION=1.4.0
ARG CMAKE_VERSION=3.20.0
ENV FORCE_CUDA="1"

ENV DEBIAN_FRONTEND=noninteractive

### update apt and install libs
RUN apt-get update &&\
apt-get install -y vim libsm6 libxext6 libxrender-dev libgl1-mesa-glx git wget libssl-dev libopencv-dev libspdlog-dev --no-install-recommends &&\
rm -rf /var/lib/apt/lists/*

RUN curl -fsSL -v -o ~/miniconda.sh -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
chmod +x ~/miniconda.sh && \
~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
/opt/conda/bin/conda install -y python=${PYTHON_VERSION} conda-build pyyaml numpy ipython cython typing typing_extensions mkl mkl-include ninja && \
/opt/conda/bin/conda clean -ya

### pytorch
RUN /opt/conda/bin/conda install pytorch==${TORCH_VERSION} torchvision==${TORCHVISION_VERSION} cudatoolkit=${CUDA} -c pytorch
ENV PATH /opt/conda/bin:$PATH

### install mmcv-full
RUN /opt/conda/bin/pip install mmcv-full==${MMCV_VERSION} -f https://download.openmmlab.com/mmcv/dist/cu${CUDA//./}/torch${TORCH_VERSION}/index.html

WORKDIR /root/workspace
### get onnxruntime
RUN wget https://github.com/microsoft/onnxruntime/releases/download/v${ONNXRUNTIME_VERSION}/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz \
&& tar -zxvf onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz &&\
pip install onnxruntime-gpu==${ONNXRUNTIME_VERSION}

### cp trt from pip to conda
RUN cp -r /usr/local/lib/python${PYTHON_VERSION}/dist-packages/tensorrt* /opt/conda/lib/python${PYTHON_VERSION}/site-packages/

### update cmake
RUN wget https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}.tar.gz &&\
tar -zxvf cmake-${CMAKE_VERSION}.tar.gz &&\
cd cmake-${CMAKE_VERSION} &&\
./bootstrap &&\
make &&\
make install

### install mmdeploy
ENV ONNXRUNTIME_DIR=/root/workspace/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}
ENV TENSORRT_DIR=/workspace/tensorrt
ARG VERSION
RUN git clone https://github.com/open-mmlab/mmdeploy &&\
cd mmdeploy &&\
if [ -z ${VERSION} ] ; then echo "No MMDeploy version passed in, building on master" ; else git checkout tags/v${VERSION} -b tag_v${VERSION} ; fi &&\
git submodule update --init --recursive &&\
rm -rf build &&\
mkdir build &&\
cd build &&\
cmake -DMMDEPLOY_TARGET_BACKENDS=ort .. &&\
make -j$(nproc) &&\
cmake -DMMDEPLOY_TARGET_BACKENDS=trt .. &&\
make -j$(nproc) &&\
cd .. &&\
pip install -e .

### build sdk
RUN git clone https://github.com/openppl-public/ppl.cv.git &&\
cd ppl.cv &&\
./build.sh cuda
RUN cd /root/workspace/mmdeploy &&\
rm -rf build/CM* &&\
mkdir -p build && cd build &&\
cmake .. \
-DMMDEPLOY_BUILD_SDK=ON \
-DCMAKE_CXX_COMPILER=g++ \
-Dpplcv_DIR=/root/workspace/ppl.cv/cuda-build/install/lib/cmake/ppl \
-DTENSORRT_DIR=${TENSORRT_DIR} \
-DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
-DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
-DMMDEPLOY_TARGET_BACKENDS="trt" \
-DMMDEPLOY_CODEBASES=all &&\
cmake --build . -- -j$(nproc) && cmake --install . &&\
cd install/example && mkdir -p build && cd build &&\
cmake -DMMDeploy_DIR=/root/workspace/mmdeploy/build/install/lib/cmake/MMDeploy .. &&\
cmake --build . && export SPDLOG_LEVEL=warn &&\
if [ -z ${VERSION} ] ; then echo "Built MMDeploy master for GPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for GPU devices successfully!" ; fi

ENV LD_LIBRARY_PATH="/root/workspace/mmdeploy/build/lib:${LD_LIBRARY_PATH}"