[Update] Switch pip to mim in Docs and Dockerfile #1591

Merged · 3 commits · Jan 4, 2023
4 changes: 2 additions & 2 deletions docker/CPU/Dockerfile
@@ -1,5 +1,5 @@
 FROM openvino/ubuntu18_dev:2021.4.2
-ARG PYTHON_VERSION=3.7
+ARG PYTHON_VERSION=3.8
 ARG TORCH_VERSION=1.10.0
 ARG TORCHVISION_VERSION=0.11.0
 ARG ONNXRUNTIME_VERSION=1.8.1
@@ -114,4 +114,4 @@ RUN cd mmdeploy && rm -rf build/CM* && mkdir -p build && cd build && cmake .. \
 -DMMDEPLOY_CODEBASES=all &&\
 cmake --build . -- -j$(nproc) && cmake --install . &&\
 export SPDLOG_LEVEL=warn &&\
-if [ -z ${VERSION} ] ; then echo "Built MMDeploy master for CPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for CPU devices successfully!" ; fi
+if [ -z ${VERSION} ] ; then echo "Built MMDeploy 1.x for CPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for CPU devices successfully!" ; fi
2 changes: 1 addition & 1 deletion docker/GPU/Dockerfile
@@ -101,6 +101,6 @@ RUN cd /root/workspace/mmdeploy &&\
 -DMMDEPLOY_CODEBASES=all &&\
 make -j$(nproc) && make install &&\
 export SPDLOG_LEVEL=warn &&\
-if [ -z ${VERSION} ] ; then echo "Built MMDeploy master for GPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for GPU devices successfully!" ; fi
+if [ -z ${VERSION} ] ; then echo "Built MMDeploy dev-1.x for GPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for GPU devices successfully!" ; fi

 ENV LD_LIBRARY_PATH="/root/workspace/mmdeploy/build/lib:${BACKUP_LD_LIBRARY_PATH}"
5 changes: 3 additions & 2 deletions docs/en/01-how-to-build/linux-x86_64.md
@@ -75,7 +75,8 @@ conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=11.1 -c pytorch -c c
 export cu_version=cu111 # cuda 11.1
 export torch_version=torch1.8
 pip install -U openmim
-mim install "mmcv>=2.0.0rc1"
+mim install mmengine
+mim install "mmcv>=2.0.0rc2"
 </code></pre>
 </td>
 </tr>
@@ -326,7 +327,7 @@ Please check [cmake build option](cmake_option.md).

 ```bash
 cd ${MMDEPLOY_DIR}
-pip install -e .
+mim install -e .
 ```

 **Note**
5 changes: 3 additions & 2 deletions docs/en/01-how-to-build/macos-arm64.md
@@ -37,7 +37,8 @@ Please refer to [get_started](../get_started.md) to install conda.
 # install pytorch & mmcv
 conda install pytorch==1.9.0 torchvision==0.10.0 -c pytorch
 pip install -U openmim
-mim install "mmcv>=2.0.0rc1"
+mim install mmengine
+mim install "mmcv>=2.0.0rc2"
 ```

 ### Install Dependencies for SDK
@@ -146,7 +147,7 @@ conda install grpcio

 ```bash
 cd ${MMDEPLOY_DIR}
-pip install -v -e .
+mim install -v -e .
 ```

 **Note**
11 changes: 6 additions & 5 deletions docs/en/get_started.md
@@ -64,6 +64,7 @@ We recommend that users follow our best practices installing MMDeploy.

 ```shell
 pip install -U openmim
+mim install mmengine
 mim install "mmcv>=2.0.0rc2"
 ```

@@ -172,12 +173,12 @@ Based on the above settings, we provide an example to convert the Faster R-CNN i

 ```shell
 # clone mmdeploy to get the deployment config. `--recursive` is not necessary
-git clone https://github.com/open-mmlab/mmdeploy.git
+git clone -b dev-1.x https://github.com/open-mmlab/mmdeploy.git

 # clone mmdetection repo. We have to use the config file to build PyTorch nn module
-git clone https://github.com/open-mmlab/mmdetection.git
+git clone -b 3.x https://github.com/open-mmlab/mmdetection.git
 cd mmdetection
-pip install -v -e .
+mim install -v -e .
 cd ..

 # download Faster R-CNN checkpoint
@@ -186,7 +187,7 @@ wget -P checkpoints https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/

 # run the command to start model conversion
 python mmdeploy/tools/deploy.py \
 mmdeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
-mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
+mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
 checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
 mmdetection/demo/demo.jpg \
 --work-dir mmdeploy_model/faster-rcnn \
@@ -201,7 +202,7 @@ For more details about model conversion, you can read [how_to_convert_model](02-

 ```{tip}
 If MMDeploy-ONNXRuntime prebuilt package is installed, you can convert the above model to onnx model and perform ONNX Runtime inference
-just by 'changing detection_tensorrt_dynamic-320x320-1344x1344.py' to 'detection_onnxruntime_dynamic.py' and making '--device' as 'cpu'.
+just by changing 'detection_tensorrt_dynamic-320x320-1344x1344.py' to 'detection_onnxruntime_dynamic.py' and making '--device' as 'cpu'.
 ```

 ## Inference Model
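For reference, the tip above amounts to the following command. This is an illustrative sketch, not part of the diff: it assumes the same cloned repos and checkpoint as the example above, and `mmdeploy_model/faster-rcnn-ort` is an arbitrary output directory chosen here.

```shell
# same conversion pipeline as the TensorRT example, but targeting ONNX Runtime on CPU
python mmdeploy/tools/deploy.py \
    mmdeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py \
    mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
    checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    mmdetection/demo/demo.jpg \
    --work-dir mmdeploy_model/faster-rcnn-ort \
    --device cpu
```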
5 changes: 3 additions & 2 deletions docs/zh_cn/01-how-to-build/linux-x86_64.md
@@ -76,7 +76,8 @@ conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=11.1 -c pytorch -c c
 export cu_version=cu111 # cuda 11.1
 export torch_version=torch1.8
 pip install -U openmim
-mim install "mmcv>=2.0.0rc1"
+mim install mmengine
+mim install "mmcv>=2.0.0rc2"
 </code></pre>
 </td>
 </tr>
@@ -323,7 +324,7 @@ export MMDEPLOY_DIR=$(pwd)

 ```bash
 cd ${MMDEPLOY_DIR}
-pip install -e .
+mim install -e .
 ```

 **注意**
5 changes: 3 additions & 2 deletions docs/zh_cn/01-how-to-build/macos-arm64.md
@@ -40,7 +40,8 @@
 # install pytorch & mmcv
 conda install pytorch==1.9.0 torchvision==0.10.0 -c pytorch
 pip install -U openmim
-mim install "mmcv>=2.0.0rc1"
+mim install mmengine
+mim install "mmcv>=2.0.0rc2"
 ```

 #### 安装 MMDeploy SDK 依赖
@@ -147,7 +148,7 @@ conda install grpcio

 ```bash
 cd ${MMDEPLOY_DIR}
-pip install -v -e .
+mim install -v -e .
 ```

 **注意**
10 changes: 5 additions & 5 deletions docs/zh_cn/get_started.md
@@ -167,13 +167,13 @@ export LD_LIBRARY_PATH=$CUDNN_DIR/lib64:$LD_LIBRARY_PATH
 以 [MMDetection](https://github.com/open-mmlab/mmdetection) 中的 `Faster R-CNN` 为例,我们可以使用如下命令,将 PyTorch 模型转换为 TensorRT 模型,从而部署到 NVIDIA GPU 上.

 ```shell
-# 克隆 mmdeploy 仓库。转换时,需要使用 mmdeploy 仓库中的配置文件,建立转换流水线
-git clone --recursive https://github.com/open-mmlab/mmdeploy.git
+# 克隆 mmdeploy 仓库。转换时,需要使用 mmdeploy 仓库中的配置文件,建立转换流水线, `--recursive` 不是必须的
+git clone -b dev-1.x --recursive https://github.com/open-mmlab/mmdeploy.git

 # 安装 mmdetection。转换时,需要使用 mmdetection 仓库中的模型配置文件,构建 PyTorch nn module
-git clone https://github.com/open-mmlab/mmdetection.git
+git clone -b 3.x https://github.com/open-mmlab/mmdetection.git
 cd mmdetection
-pip install -v -e .
+mim install -v -e .
 cd ..

 # 下载 Faster R-CNN 模型权重
@@ -182,7 +182,7 @@ wget -P checkpoints https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/

 # 执行转换命令,实现端到端的转换
 python mmdeploy/tools/deploy.py \
 mmdeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
-mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
+mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
 checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
 mmdetection/demo/demo.jpg \
 --work-dir mmdeploy_model/faster-rcnn \